00:00:00.001 Started by upstream project "autotest-per-patch" build number 130900 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.055 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.055 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.183 Using shallow fetch with depth 1 00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.184 > git --version # timeout=10 00:00:00.252 > git --version # 'git version 2.39.2' 00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.304 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.297 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.309 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.322 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.322 > git config core.sparsecheckout # timeout=10 00:00:04.333 > git read-tree -mu HEAD # timeout=10 00:00:04.348 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.365 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.365 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.487 [Pipeline] Start of Pipeline 00:00:04.499 [Pipeline] library 00:00:04.501 Loading library shm_lib@master 00:00:04.501 Library shm_lib@master is cached. Copying from home. 00:00:04.516 [Pipeline] node 00:00:04.528 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.529 [Pipeline] { 00:00:04.539 [Pipeline] catchError 00:00:04.540 [Pipeline] { 00:00:04.550 [Pipeline] wrap 00:00:04.560 [Pipeline] { 00:00:04.567 [Pipeline] stage 00:00:04.568 [Pipeline] { (Prologue) 00:00:04.583 [Pipeline] echo 00:00:04.585 Node: VM-host-SM38 00:00:04.590 [Pipeline] cleanWs 00:00:04.600 [WS-CLEANUP] Deleting project workspace... 00:00:04.600 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.606 [WS-CLEANUP] done 00:00:04.788 [Pipeline] setCustomBuildProperty 00:00:04.892 [Pipeline] httpRequest 00:00:05.801 [Pipeline] echo 00:00:05.803 Sorcerer 10.211.164.101 is alive 00:00:05.812 [Pipeline] retry 00:00:05.814 [Pipeline] { 00:00:05.829 [Pipeline] httpRequest 00:00:05.833 HttpMethod: GET 00:00:05.834 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.834 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.846 Response Code: HTTP/1.1 200 OK 00:00:05.847 Success: Status code 200 is in the accepted range: 200,404 00:00:05.847 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.101 [Pipeline] } 00:00:09.118 [Pipeline] // retry 00:00:09.126 [Pipeline] sh 00:00:09.404 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.416 [Pipeline] httpRequest 00:00:09.812 [Pipeline] echo 00:00:09.814 Sorcerer 10.211.164.101 is alive 00:00:09.824 [Pipeline] retry 00:00:09.826 [Pipeline] { 00:00:09.840 [Pipeline] httpRequest 00:00:09.844 HttpMethod: GET 00:00:09.845 URL: http://10.211.164.101/packages/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:09.846 Sending request to url: http://10.211.164.101/packages/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:00:09.860 Response Code: HTTP/1.1 200 OK 00:00:09.860 Success: Status code 200 is in the accepted range: 200,404 00:00:09.861 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:01:12.177 [Pipeline] } 00:01:12.194 [Pipeline] // retry 00:01:12.202 [Pipeline] sh 00:01:12.480 + tar --no-same-owner -xf spdk_91fca59bcb29e203aa17ccfc5010f6cf78c8ec8d.tar.gz 00:01:15.782 [Pipeline] sh 00:01:16.061 + git -C spdk log --oneline -n5 00:01:16.061 91fca59bc lib/reduce: unlink meta file 00:01:16.061 92108e0a2 fsdev/aio: add support for null IOs 00:01:16.061 dcdab59d3 lib/reduce: Check return code of read superblock 00:01:16.061 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:01:16.061 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:01:16.077 [Pipeline] writeFile 00:01:16.090 [Pipeline] sh 00:01:16.367 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:16.377 [Pipeline] sh 00:01:16.653 + cat autorun-spdk.conf 00:01:16.653 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.653 SPDK_TEST_NVME=1 00:01:16.653 SPDK_TEST_FTL=1 00:01:16.653 SPDK_TEST_ISAL=1 00:01:16.653 SPDK_RUN_ASAN=1 00:01:16.653 SPDK_RUN_UBSAN=1 00:01:16.653 SPDK_TEST_XNVME=1 00:01:16.653 SPDK_TEST_NVME_FDP=1 00:01:16.653 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.659 RUN_NIGHTLY=0 00:01:16.661 [Pipeline] } 00:01:16.675 [Pipeline] // stage 00:01:16.690 [Pipeline] stage 00:01:16.692 [Pipeline] { (Run VM) 00:01:16.704 [Pipeline] sh 00:01:17.087 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.087 + echo 'Start stage prepare_nvme.sh' 00:01:17.087 Start stage prepare_nvme.sh 00:01:17.087 + [[ -n 8 ]] 00:01:17.087 + disk_prefix=ex8 00:01:17.087 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:17.087 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:17.087 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:17.087 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.087 ++ SPDK_TEST_NVME=1 00:01:17.087 ++ SPDK_TEST_FTL=1 00:01:17.087 ++ SPDK_TEST_ISAL=1 00:01:17.087 ++ SPDK_RUN_ASAN=1 
00:01:17.087 ++ SPDK_RUN_UBSAN=1 00:01:17.087 ++ SPDK_TEST_XNVME=1 00:01:17.087 ++ SPDK_TEST_NVME_FDP=1 00:01:17.087 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.087 ++ RUN_NIGHTLY=0 00:01:17.087 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:17.087 + nvme_files=() 00:01:17.087 + declare -A nvme_files 00:01:17.087 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.087 + nvme_files['nvme.img']=5G 00:01:17.087 + nvme_files['nvme-cmb.img']=5G 00:01:17.087 + nvme_files['nvme-multi0.img']=4G 00:01:17.087 + nvme_files['nvme-multi1.img']=4G 00:01:17.087 + nvme_files['nvme-multi2.img']=4G 00:01:17.087 + nvme_files['nvme-openstack.img']=8G 00:01:17.087 + nvme_files['nvme-zns.img']=5G 00:01:17.087 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.087 + (( SPDK_TEST_FTL == 1 )) 00:01:17.087 + nvme_files["nvme-ftl.img"]=6G 00:01:17.087 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.087 + nvme_files["nvme-fdp.img"]=1G 00:01:17.087 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:17.087 + for nvme in "${!nvme_files[@]}" 00:01:17.087 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:17.343 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.343 + for nvme in "${!nvme_files[@]}" 00:01:17.343 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G 00:01:17.907 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:17.907 + for nvme in "${!nvme_files[@]}" 00:01:17.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:17.907 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.907 + for nvme in "${!nvme_files[@]}" 00:01:17.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:17.907 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:17.907 + for nvme in "${!nvme_files[@]}" 00:01:17.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:18.163 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.163 + for nvme in "${!nvme_files[@]}" 00:01:18.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:18.163 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.163 + for nvme in "${!nvme_files[@]}" 00:01:18.163 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:18.728 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.728 + for nvme in "${!nvme_files[@]}" 00:01:18.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G 00:01:18.728 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:18.728 + for nvme in "${!nvme_files[@]}" 00:01:18.728 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:19.295 Formatting 
'/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.295 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:19.295 + echo 'End stage prepare_nvme.sh' 00:01:19.295 End stage prepare_nvme.sh 00:01:19.307 [Pipeline] sh 00:01:19.591 + DISTRO=fedora39 00:01:19.591 + CPUS=10 00:01:19.591 + RAM=12288 00:01:19.591 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:19.591 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:19.591 00:01:19.591 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:19.591 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:19.591 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:19.591 HELP=0 00:01:19.591 DRY_RUN=0 00:01:19.591 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img, 00:01:19.591 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:19.591 NVME_AUTO_CREATE=0 00:01:19.591 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,, 00:01:19.591 NVME_CMB=,,,, 00:01:19.591 NVME_PMR=,,,, 00:01:19.591 NVME_ZNS=,,,, 00:01:19.591 NVME_MS=true,,,, 00:01:19.591 NVME_FDP=,,,on, 00:01:19.591 SPDK_VAGRANT_DISTRO=fedora39 00:01:19.591 SPDK_VAGRANT_VMCPU=10 00:01:19.591 SPDK_VAGRANT_VMRAM=12288 00:01:19.591 SPDK_VAGRANT_PROVIDER=libvirt 00:01:19.591 SPDK_VAGRANT_HTTP_PROXY= 00:01:19.591 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:19.591 SPDK_OPENSTACK_NETWORK=0 00:01:19.591 VAGRANT_PACKAGE_BOX=0 00:01:19.592 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:19.592 FORCE_DISTRO=true 00:01:19.592 VAGRANT_BOX_VERSION= 00:01:19.592 EXTRA_VAGRANTFILES= 00:01:19.592 NIC_MODEL=e1000 00:01:19.592 00:01:19.592 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:19.592 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:22.139 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.710 ==> default: Creating image (snapshot of base box volume). 00:01:22.972 ==> default: Creating domain with the following settings... 
00:01:22.972 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728378434_c1f05ff508ea84a51cee 00:01:22.972 ==> default: -- Domain type: kvm 00:01:22.972 ==> default: -- Cpus: 10 00:01:22.972 ==> default: -- Feature: acpi 00:01:22.972 ==> default: -- Feature: apic 00:01:22.972 ==> default: -- Feature: pae 00:01:22.972 ==> default: -- Memory: 12288M 00:01:22.972 ==> default: -- Memory Backing: hugepages: 00:01:22.972 ==> default: -- Management MAC: 00:01:22.972 ==> default: -- Loader: 00:01:22.972 ==> default: -- Nvram: 00:01:22.972 ==> default: -- Base box: spdk/fedora39 00:01:22.972 ==> default: -- Storage pool: default 00:01:22.972 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728378434_c1f05ff508ea84a51cee.img (20G) 00:01:22.972 ==> default: -- Volume Cache: default 00:01:22.972 ==> default: -- Kernel: 00:01:22.972 ==> default: -- Initrd: 00:01:22.972 ==> default: -- Graphics Type: vnc 00:01:22.972 ==> default: -- Graphics Port: -1 00:01:22.972 ==> default: -- Graphics IP: 127.0.0.1 00:01:22.972 ==> default: -- Graphics Password: Not defined 00:01:22.972 ==> default: -- Video Type: cirrus 00:01:22.972 ==> default: -- Video VRAM: 9216 00:01:22.972 ==> default: -- Sound Type: 00:01:22.972 ==> default: -- Keymap: en-us 00:01:22.972 ==> default: -- TPM Path: 00:01:22.972 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:22.972 ==> default: -- Command line args: 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:22.972 ==> default: -> value=-drive, 00:01:22.972 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:22.972 ==> default: -> value=-device, 00:01:22.972 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.234 ==> default: Creating shared folders metadata... 00:01:23.234 ==> default: Starting domain. 00:01:25.145 ==> default: Waiting for domain to get an IP address... 00:01:40.015 ==> default: Waiting for SSH to become available... 00:01:40.015 ==> default: Configuring and enabling network interfaces... 00:01:43.306 default: SSH address: 192.168.121.82:22 00:01:43.306 default: SSH username: vagrant 00:01:43.306 default: SSH auth method: private key 00:01:45.224 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.820 ==> default: Mounting SSHFS shared folder... 00:01:52.090 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:52.090 ==> default: Checking Mount.. 00:01:53.464 ==> default: Folder Successfully Mounted! 00:01:53.464 00:01:53.464 SUCCESS! 00:01:53.464 00:01:53.464 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:53.464 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.464 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:53.464 00:01:53.471 [Pipeline] } 00:01:53.486 [Pipeline] // stage 00:01:53.495 [Pipeline] dir 00:01:53.495 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:53.497 [Pipeline] { 00:01:53.510 [Pipeline] catchError 00:01:53.512 [Pipeline] { 00:01:53.524 [Pipeline] sh 00:01:53.802 + vagrant ssh-config --host vagrant 00:01:53.802 + sed -ne '/^Host/,$p' 00:01:53.802 + tee ssh_conf 00:01:56.327 Host vagrant 00:01:56.327 HostName 192.168.121.82 00:01:56.327 User vagrant 00:01:56.327 Port 22 00:01:56.327 UserKnownHostsFile /dev/null 00:01:56.327 StrictHostKeyChecking no 00:01:56.327 PasswordAuthentication no 00:01:56.327 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:56.327 IdentitiesOnly yes 00:01:56.327 LogLevel FATAL 00:01:56.327 ForwardAgent yes 00:01:56.327 ForwardX11 yes 00:01:56.327 00:01:56.340 [Pipeline] withEnv 00:01:56.342 [Pipeline] { 00:01:56.356 [Pipeline] sh 00:01:56.634 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:56.634 source /etc/os-release 00:01:56.634 [[ -e /image.version ]] && img=$(< /image.version) 00:01:56.634 # Minimal, systemd-like check. 
00:01:56.634 if [[ -e /.dockerenv ]]; then 00:01:56.634 # Clear garbage from the node'\''s name: 00:01:56.634 # agt-er_autotest_547-896 -> autotest_547-896 00:01:56.634 # $HOSTNAME is the actual container id 00:01:56.634 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:56.634 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:56.634 # We can assume this is a mount from a host where container is running, 00:01:56.634 # so fetch its hostname to easily identify the target swarm worker. 00:01:56.634 container="$(< /etc/hostname) ($agent)" 00:01:56.634 else 00:01:56.634 # Fallback 00:01:56.634 container=$agent 00:01:56.634 fi 00:01:56.634 fi 00:01:56.634 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:56.634 ' 00:01:56.644 [Pipeline] } 00:01:56.660 [Pipeline] // withEnv 00:01:56.669 [Pipeline] setCustomBuildProperty 00:01:56.684 [Pipeline] stage 00:01:56.687 [Pipeline] { (Tests) 00:01:56.703 [Pipeline] sh 00:01:56.981 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:56.993 [Pipeline] sh 00:01:57.269 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:57.283 [Pipeline] timeout 00:01:57.283 Timeout set to expire in 50 min 00:01:57.285 [Pipeline] { 00:01:57.300 [Pipeline] sh 00:01:57.597 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:57.858 HEAD is now at 91fca59bc lib/reduce: unlink meta file 00:01:57.869 [Pipeline] sh 00:01:58.146 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:58.160 [Pipeline] sh 00:01:58.437 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:58.452 [Pipeline] sh 00:01:58.729 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:58.729 ++ readlink -f spdk_repo 00:01:58.729 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:58.729 + [[ -n /home/vagrant/spdk_repo ]] 00:01:58.729 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:58.729 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:58.729 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:58.729 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:58.729 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:58.729 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:58.729 + cd /home/vagrant/spdk_repo 00:01:58.729 + source /etc/os-release 00:01:58.729 ++ NAME='Fedora Linux' 00:01:58.729 ++ VERSION='39 (Cloud Edition)' 00:01:58.729 ++ ID=fedora 00:01:58.729 ++ VERSION_ID=39 00:01:58.729 ++ VERSION_CODENAME= 00:01:58.729 ++ PLATFORM_ID=platform:f39 00:01:58.729 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:58.729 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:58.729 ++ LOGO=fedora-logo-icon 00:01:58.729 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:58.729 ++ HOME_URL=https://fedoraproject.org/ 00:01:58.729 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:58.729 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:58.729 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:58.729 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:58.729 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:58.730 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:58.730 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:58.730 ++ SUPPORT_END=2024-11-12 00:01:58.730 ++ VARIANT='Cloud Edition' 00:01:58.730 ++ VARIANT_ID=cloud 00:01:58.730 + uname -a 00:01:58.730 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:58.730 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:58.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:59.245 Hugepages 00:01:59.245 node hugesize free / total 00:01:59.245 node0 1048576kB 0 / 0 00:01:59.245 node0 2048kB 0 / 0 00:01:59.245 00:01:59.245 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:59.504 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:59.504 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:59.504 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:59.504 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:59.504 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:59.504 + rm -f /tmp/spdk-ld-path 00:01:59.504 + source autorun-spdk.conf 00:01:59.504 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.504 ++ SPDK_TEST_NVME=1 00:01:59.504 ++ SPDK_TEST_FTL=1 00:01:59.504 ++ SPDK_TEST_ISAL=1 00:01:59.504 ++ SPDK_RUN_ASAN=1 00:01:59.504 ++ SPDK_RUN_UBSAN=1 00:01:59.504 ++ SPDK_TEST_XNVME=1 00:01:59.504 ++ SPDK_TEST_NVME_FDP=1 00:01:59.504 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.504 ++ RUN_NIGHTLY=0 00:01:59.504 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:59.504 + [[ -n '' ]] 00:01:59.504 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:59.504 + for M in /var/spdk/build-*-manifest.txt 00:01:59.504 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:59.504 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.504 + for M in /var/spdk/build-*-manifest.txt 00:01:59.504 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:59.504 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.504 + for M in /var/spdk/build-*-manifest.txt 00:01:59.504 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:59.504 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:59.504 ++ uname 00:01:59.504 + [[ Linux == \L\i\n\u\x ]] 00:01:59.504 + sudo dmesg -T 00:01:59.504 + sudo dmesg --clear 00:01:59.504 + dmesg_pid=5027 00:01:59.504 
+ [[ Fedora Linux == FreeBSD ]] 00:01:59.504 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.504 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.504 + sudo dmesg -Tw 00:01:59.504 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:59.504 + [[ -x /usr/src/fio-static/fio ]] 00:01:59.504 + export FIO_BIN=/usr/src/fio-static/fio 00:01:59.504 + FIO_BIN=/usr/src/fio-static/fio 00:01:59.504 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:59.504 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:59.504 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:59.504 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.504 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.504 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:59.504 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.504 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.504 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:59.504 Test configuration: 00:01:59.504 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.504 SPDK_TEST_NVME=1 00:01:59.504 SPDK_TEST_FTL=1 00:01:59.504 SPDK_TEST_ISAL=1 00:01:59.504 SPDK_RUN_ASAN=1 00:01:59.504 SPDK_RUN_UBSAN=1 00:01:59.504 SPDK_TEST_XNVME=1 00:01:59.504 SPDK_TEST_NVME_FDP=1 00:01:59.504 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.504 RUN_NIGHTLY=0 09:07:51 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:59.504 09:07:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:59.504 09:07:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:59.504 09:07:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:59.504 09:07:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:59.504 09:07:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:59.504 09:07:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.504 09:07:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.504 09:07:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.504 09:07:51 -- paths/export.sh@5 -- $ export PATH 00:01:59.504 09:07:51 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.504 09:07:51 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:59.504 09:07:51 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:59.504 09:07:51 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728378471.XXXXXX 00:01:59.504 09:07:51 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728378471.k1NxFI 00:01:59.504 09:07:51 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:59.504 09:07:51 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:59.504 09:07:51 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:59.504 09:07:51 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:59.504 09:07:51 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:59.504 09:07:51 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:59.504 09:07:51 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:59.504 09:07:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.504 09:07:51 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:59.504 09:07:51 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:59.504 09:07:51 -- pm/common@17 -- $ local monitor 00:01:59.504 09:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.504 09:07:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.504 09:07:51 -- pm/common@25 -- $ sleep 1 00:01:59.504 09:07:51 -- pm/common@21 -- $ date +%s 00:01:59.504 09:07:51 -- pm/common@21 -- $ date +%s 00:01:59.504 09:07:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728378471 00:01:59.504 09:07:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728378471 00:01:59.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728378471_collect-vmstat.pm.log 00:01:59.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728378471_collect-cpu-load.pm.log 00:02:00.696 09:07:52 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:00.696 09:07:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:00.696 09:07:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:00.696 09:07:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:00.696 09:07:52 -- spdk/autobuild.sh@16 -- $ date -u 00:02:00.696 Tue Oct 8 09:07:52 AM UTC 2024 00:02:00.696 09:07:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:00.696 v25.01-pre-42-g91fca59bc 
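The autobuild prologue above creates a per-run scratch workspace, assembles the configure flags, and then starts two background resource monitors (collect-cpu-load and collect-vmstat, each logging under ../output/power with a timestamp-suffixed name) before registering an EXIT trap to stop them. A minimal sketch of that start/trap/stop pattern, with stock vmstat and mpstat as assumed stand-ins for SPDK's monitor helpers (the sampler commands, file names, and directory here are illustrative assumptions, not the real scripts):

#!/usr/bin/env bash
# Illustrative reduction of the monitor pattern in the log above; vmstat
# (procps) and mpstat (sysstat) stand in for SPDK's collect-vmstat and
# collect-cpu-load helpers, which this sketch does not reproduce.
outdir=$PWD/power
mkdir -p "$outdir"
stamp=$(date +%s)                                   # e.g. 1728378471, as in the log names

vmstat -n 1 > "$outdir/monitor.$stamp.vmstat.log" &
vmstat_pid=$!
mpstat -P ALL 1 > "$outdir/monitor.$stamp.cpu.log" &
mpstat_pid=$!

stop_monitor_resources() {
    # Stop both samplers; ignore errors if they already exited.
    kill "$vmstat_pid" "$mpstat_pid" 2>/dev/null || true
}
trap stop_monitor_resources EXIT                    # mirrors 'trap stop_monitor_resources EXIT' above

sleep 5                                             # the configure/make workload would run here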
00:02:00.696 09:07:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:00.696 09:07:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:00.696 09:07:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:00.696 09:07:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:00.696 09:07:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.696 ************************************ 00:02:00.696 START TEST asan 00:02:00.696 ************************************ 00:02:00.696 using asan 00:02:00.696 09:07:52 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:00.696 00:02:00.696 real 0m0.000s 00:02:00.696 user 0m0.000s 00:02:00.696 sys 0m0.000s 00:02:00.696 09:07:52 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:00.696 09:07:52 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:00.696 ************************************ 00:02:00.696 END TEST asan 00:02:00.696 ************************************ 00:02:00.696 09:07:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:00.696 09:07:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:00.696 09:07:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:00.696 09:07:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:00.696 09:07:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.696 ************************************ 00:02:00.696 START TEST ubsan 00:02:00.696 ************************************ 00:02:00.696 using ubsan 00:02:00.696 09:07:52 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:00.696 00:02:00.696 real 0m0.000s 00:02:00.696 user 0m0.000s 00:02:00.696 sys 0m0.000s 00:02:00.696 09:07:52 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:00.696 09:07:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:00.696 ************************************ 00:02:00.696 END TEST ubsan 00:02:00.696 ************************************ 00:02:00.696 09:07:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:00.696 09:07:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:00.696 09:07:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:00.696 09:07:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:00.696 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:00.696 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:01.261 Using 'verbs' RDMA provider 00:02:11.806 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:21.815 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:21.815 Creating mk/config.mk...done. 00:02:21.815 Creating mk/cc.flags.mk...done. 00:02:21.815 Type 'make' to build. 
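The asan and ubsan checks above, and the make step that follows, all go through the same run_test wrapper: print a START banner, time the command, print an END banner. A minimal sketch reconstructing only the behavior visible in this log (SPDK's actual run_test in autotest_common.sh additionally handles xtrace toggling and exit-code bookkeeping, which is assumed away here):

#!/usr/bin/env bash
# Illustrative reduction of the run_test wrapper seen in this log; the real
# SPDK helper does more than this.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # emits the real/user/sys lines seen above
    local rc=$?                     # capture the wrapped command's status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test asan echo 'using asan'     # prints the banners and timing, as in the log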
00:02:21.815 09:08:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:21.815 09:08:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:21.815 09:08:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:21.815 09:08:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.815 ************************************ 00:02:21.815 START TEST make 00:02:21.815 ************************************ 00:02:21.815 09:08:13 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:22.073 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:22.073 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:22.073 meson setup builddir \ 00:02:22.073 -Dwith-libaio=enabled \ 00:02:22.073 -Dwith-liburing=enabled \ 00:02:22.073 -Dwith-libvfn=disabled \ 00:02:22.073 -Dwith-spdk=false && \ 00:02:22.073 meson compile -C builddir && \ 00:02:22.073 cd -) 00:02:22.073 make[1]: Nothing to be done for 'all'. 00:02:24.602 The Meson build system 00:02:24.602 Version: 1.5.0 00:02:24.602 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:24.602 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:24.602 Build type: native build 00:02:24.602 Project name: xnvme 00:02:24.602 Project version: 0.7.3 00:02:24.602 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:24.602 C linker for the host machine: cc ld.bfd 2.40-14 00:02:24.602 Host machine cpu family: x86_64 00:02:24.602 Host machine cpu: x86_64 00:02:24.602 Message: host_machine.system: linux 00:02:24.602 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:24.602 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:24.602 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:24.602 Run-time dependency threads found: YES 00:02:24.602 Has header "setupapi.h" : NO 00:02:24.602 Has header "linux/blkzoned.h" : YES 00:02:24.602 Has header "linux/blkzoned.h" : YES (cached) 00:02:24.602 Has header "libaio.h" : YES 00:02:24.602 Library aio found: YES 00:02:24.602 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:24.602 Run-time dependency liburing found: YES 2.2 00:02:24.602 Dependency libvfn skipped: feature with-libvfn disabled 00:02:24.602 Run-time dependency appleframeworks found: NO (tried framework) 00:02:24.602 Run-time dependency appleframeworks found: NO (tried framework) 00:02:24.602 Configuring xnvme_config.h using configuration 00:02:24.602 Configuring xnvme.spec using configuration 00:02:24.602 Run-time dependency bash-completion found: YES 2.11 00:02:24.602 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:24.602 Program cp found: YES (/usr/bin/cp) 00:02:24.602 Has header "winsock2.h" : NO 00:02:24.602 Has header "dbghelp.h" : NO 00:02:24.602 Library rpcrt4 found: NO 00:02:24.602 Library rt found: YES 00:02:24.602 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:24.602 Found CMake: /usr/bin/cmake (3.27.7) 00:02:24.602 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:24.602 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:24.602 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:24.602 Build targets in project: 32 00:02:24.602 00:02:24.602 xnvme 0.7.3 00:02:24.602 00:02:24.602 User defined options 00:02:24.602 with-libaio : enabled 00:02:24.602 with-liburing: enabled 00:02:24.602 with-libvfn : disabled 00:02:24.602 with-spdk : false 00:02:24.602 00:02:24.602 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.602 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:24.602 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:24.602 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:24.602 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:24.602 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:24.602 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:24.602 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:24.602 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:24.602 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:24.602 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:24.602 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:24.602 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:24.861 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:24.861 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:24.861 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:24.861 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:24.861 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:24.861 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:24.861 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:24.861 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:24.861 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:24.861 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:24.861 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:24.861 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:24.861 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:24.861 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:24.861 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:24.861 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:24.861 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:24.861 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:24.861 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:24.861 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:24.861 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:24.861 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:24.861 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:24.861 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:24.861 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:24.861 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:24.861 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:24.861 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:24.861 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:24.861 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:24.861 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:24.861 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:24.861 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:24.861 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:24.861 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:24.861 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:24.861 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:24.861 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:25.120 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:25.120 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:25.120 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:25.120 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:25.120 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:25.120 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:25.120 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:25.120 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:25.120 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:25.120 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:25.120 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:25.120 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:25.120 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:25.120 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:25.120 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:25.120 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:25.120 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:25.120 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:25.120 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:25.120 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:25.379 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:25.379 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:25.379 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:25.379 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:25.379 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:25.379 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:25.379 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:25.379 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:25.379 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:25.379 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:25.379 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:25.379 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:25.379 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:25.379 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:25.379 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:25.379 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:25.379 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:25.379 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:25.379 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:25.379 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:25.379 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:25.379 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:25.638 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:25.638 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:25.638 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:25.638 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:25.638 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:25.638 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:25.638 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:25.638 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:25.638 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:25.638 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:25.638 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:25.638 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:25.638 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:25.638 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:25.638 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:25.638 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:25.638 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:25.638 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:25.638 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:25.638 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:25.638 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:25.638 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:25.638 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:25.638 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:25.638 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:25.638 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:25.638 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:25.638 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:25.638 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:25.638 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:25.638 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:25.638 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:25.638 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:25.638 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:25.638 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:25.638 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:25.638 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:25.896 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:25.896 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:25.896 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:25.896 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:25.896 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:25.896 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:25.896 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:25.896 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:25.896 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:25.896 [138/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:25.896 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:25.896 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:25.896 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:25.896 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:25.896 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:25.896 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:25.896 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:25.896 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:26.155 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:26.155 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:26.155 [149/203] Linking target lib/libxnvme.so 00:02:26.155 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:26.155 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:26.155 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:26.155 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:26.155 [154/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:26.155 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:26.155 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:26.155 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:26.155 [158/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:26.155 [159/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:26.155 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:26.155 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:26.155 [162/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:26.155 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:26.155 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:26.155 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:26.155 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:26.414 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:26.414 [168/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:26.414 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:26.414 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:26.414 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:26.414 [172/203] Linking static target lib/libxnvme.a 00:02:26.414 [173/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:26.414 [174/203] Linking target tests/xnvme_tests_async_intf 00:02:26.414 [175/203] Linking target tests/xnvme_tests_enum 
00:02:26.414 [176/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:26.414 [177/203] Linking target tests/xnvme_tests_buf 00:02:26.414 [178/203] Linking target tests/xnvme_tests_cli 00:02:26.414 [179/203] Linking target tests/xnvme_tests_ioworker 00:02:26.414 [180/203] Linking target tests/xnvme_tests_scc 00:02:26.414 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:26.414 [182/203] Linking target tests/xnvme_tests_lblk 00:02:26.414 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:02:26.414 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:26.414 [185/203] Linking target tests/xnvme_tests_map 00:02:26.414 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:26.414 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:26.414 [188/203] Linking target tools/xdd 00:02:26.414 [189/203] Linking target tests/xnvme_tests_znd_append 00:02:26.414 [190/203] Linking target tools/lblk 00:02:26.414 [191/203] Linking target tools/xnvme 00:02:26.414 [192/203] Linking target tests/xnvme_tests_kvs 00:02:26.414 [193/203] Linking target tools/xnvme_file 00:02:26.414 [194/203] Linking target tools/kvs 00:02:26.414 [195/203] Linking target examples/xnvme_enum 00:02:26.672 [196/203] Linking target tools/zoned 00:02:26.672 [197/203] Linking target examples/xnvme_dev 00:02:26.672 [198/203] Linking target examples/xnvme_io_async 00:02:26.672 [199/203] Linking target examples/zoned_io_async 00:02:26.672 [200/203] Linking target examples/xnvme_single_async 00:02:26.672 [201/203] Linking target examples/zoned_io_sync 00:02:26.672 [202/203] Linking target examples/xnvme_hello 00:02:26.672 [203/203] Linking target examples/xnvme_single_sync 00:02:26.672 INFO: autodetecting backend as ninja 00:02:26.672 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:26.672 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:31.936 The Meson build system 00:02:31.936 Version: 1.5.0 00:02:31.936 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:31.936 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:31.936 Build type: native build 00:02:31.936 Program cat found: YES (/usr/bin/cat) 00:02:31.936 Project name: DPDK 00:02:31.936 Project version: 24.03.0 00:02:31.936 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:31.936 C linker for the host machine: cc ld.bfd 2.40-14 00:02:31.936 Host machine cpu family: x86_64 00:02:31.936 Host machine cpu: x86_64 00:02:31.936 Message: ## Building in Developer Mode ## 00:02:31.936 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:31.936 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:31.936 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:31.936 Program python3 found: YES (/usr/bin/python3) 00:02:31.936 Program cat found: YES (/usr/bin/cat) 00:02:31.936 Compiler for C supports arguments -march=native: YES 00:02:31.936 Checking for size of "void *" : 8 00:02:31.936 Checking for size of "void *" : 8 (cached) 00:02:31.936 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:31.936 Library m found: YES 00:02:31.936 Library numa found: YES 00:02:31.936 Has header "numaif.h" : YES 00:02:31.936 Library fdt found: NO 00:02:31.936 Library execinfo found: NO 00:02:31.936 Has header "execinfo.h" : YES 00:02:31.936 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 
00:02:31.936 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:31.936 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:31.936 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:31.936 Run-time dependency openssl found: YES 3.1.1
00:02:31.936 Run-time dependency libpcap found: YES 1.10.4
00:02:31.936 Has header "pcap.h" with dependency libpcap: YES
00:02:31.936 Compiler for C supports arguments -Wcast-qual: YES
00:02:31.936 Compiler for C supports arguments -Wdeprecated: YES
00:02:31.936 Compiler for C supports arguments -Wformat: YES
00:02:31.936 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:31.936 Compiler for C supports arguments -Wformat-security: NO
00:02:31.936 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:31.936 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:31.936 Compiler for C supports arguments -Wnested-externs: YES
00:02:31.936 Compiler for C supports arguments -Wold-style-definition: YES
00:02:31.936 Compiler for C supports arguments -Wpointer-arith: YES
00:02:31.936 Compiler for C supports arguments -Wsign-compare: YES
00:02:31.936 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:31.936 Compiler for C supports arguments -Wundef: YES
00:02:31.936 Compiler for C supports arguments -Wwrite-strings: YES
00:02:31.936 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:31.936 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:31.936 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:31.936 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:31.936 Program objdump found: YES (/usr/bin/objdump)
00:02:31.936 Compiler for C supports arguments -mavx512f: YES
00:02:31.936 Checking if "AVX512 checking" compiles: YES
00:02:31.936 Fetching value of define "__SSE4_2__" : 1
00:02:31.936 Fetching value of define "__AES__" : 1
00:02:31.936 Fetching value of define "__AVX__" : 1
00:02:31.936 Fetching value of define "__AVX2__" : 1
00:02:31.936 Fetching value of define "__AVX512BW__" : 1
00:02:31.936 Fetching value of define "__AVX512CD__" : 1
00:02:31.936 Fetching value of define "__AVX512DQ__" : 1
00:02:31.936 Fetching value of define "__AVX512F__" : 1
00:02:31.936 Fetching value of define "__AVX512VL__" : 1
00:02:31.936 Fetching value of define "__PCLMUL__" : 1
00:02:31.936 Fetching value of define "__RDRND__" : 1
00:02:31.936 Fetching value of define "__RDSEED__" : 1
00:02:31.936 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:31.936 Fetching value of define "__znver1__" : (undefined)
00:02:31.936 Fetching value of define "__znver2__" : (undefined)
00:02:31.936 Fetching value of define "__znver3__" : (undefined)
00:02:31.936 Fetching value of define "__znver4__" : (undefined)
00:02:31.936 Library asan found: YES
00:02:31.936 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:31.936 Message: lib/log: Defining dependency "log"
00:02:31.936 Message: lib/kvargs: Defining dependency "kvargs"
00:02:31.936 Message: lib/telemetry: Defining dependency "telemetry"
00:02:31.936 Library rt found: YES
00:02:31.936 Checking for function "getentropy" : NO
00:02:31.936 Message: lib/eal: Defining dependency "eal"
00:02:31.936 Message: lib/ring: Defining dependency "ring"
00:02:31.936 Message: lib/rcu: Defining dependency "rcu"
00:02:31.936 Message: lib/mempool: Defining dependency "mempool"
00:02:31.936 Message: lib/mbuf: Defining dependency "mbuf"
00:02:31.936 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:31.936 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:31.936 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:31.936 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:31.936 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:31.936 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:31.936 Compiler for C supports arguments -mpclmul: YES
00:02:31.936 Compiler for C supports arguments -maes: YES
00:02:31.936 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:31.936 Compiler for C supports arguments -mavx512bw: YES
00:02:31.936 Compiler for C supports arguments -mavx512dq: YES
00:02:31.936 Compiler for C supports arguments -mavx512vl: YES
00:02:31.936 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:31.936 Compiler for C supports arguments -mavx2: YES
00:02:31.936 Compiler for C supports arguments -mavx: YES
00:02:31.936 Message: lib/net: Defining dependency "net"
00:02:31.936 Message: lib/meter: Defining dependency "meter"
00:02:31.936 Message: lib/ethdev: Defining dependency "ethdev"
00:02:31.936 Message: lib/pci: Defining dependency "pci"
00:02:31.936 Message: lib/cmdline: Defining dependency "cmdline"
00:02:31.936 Message: lib/hash: Defining dependency "hash"
00:02:31.936 Message: lib/timer: Defining dependency "timer"
00:02:31.936 Message: lib/compressdev: Defining dependency "compressdev"
00:02:31.936 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:31.936 Message: lib/dmadev: Defining dependency "dmadev"
00:02:31.937 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:31.937 Message: lib/power: Defining dependency "power"
00:02:31.937 Message: lib/reorder: Defining dependency "reorder"
00:02:31.937 Message: lib/security: Defining dependency "security"
00:02:31.937 Has header "linux/userfaultfd.h" : YES
00:02:31.937 Has header "linux/vduse.h" : YES
00:02:31.937 Message: lib/vhost: Defining dependency "vhost"
00:02:31.937 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:31.937 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:31.937 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:31.937 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:31.937 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:31.937 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:31.937 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:31.937 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:31.937 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:31.937 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:31.937 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:31.937 Configuring doxy-api-html.conf using configuration
00:02:31.937 Configuring doxy-api-man.conf using configuration
00:02:31.937 Program mandb found: YES (/usr/bin/mandb)
00:02:31.937 Program sphinx-build found: NO
00:02:31.937 Configuring rte_build_config.h using configuration
00:02:31.937 Message:
00:02:31.937 =================
00:02:31.937 Applications Enabled
00:02:31.937 =================
00:02:31.937
00:02:31.937 apps:
00:02:31.937
00:02:31.937
00:02:31.937 Message:
00:02:31.937 =================
00:02:31.937 Libraries Enabled
00:02:31.937 =================
00:02:31.937
00:02:31.937 libs:
00:02:31.937 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:31.937 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:31.937 cryptodev, dmadev, power, reorder, security, vhost,
00:02:31.937
00:02:31.937 Message:
00:02:31.937 ===============
00:02:31.937 Drivers Enabled
00:02:31.937 ===============
00:02:31.937
00:02:31.937 common:
00:02:31.937
00:02:31.937 bus:
00:02:31.937 pci, vdev,
00:02:31.937 mempool:
00:02:31.937 ring,
00:02:31.937 dma:
00:02:31.937
00:02:31.937 net:
00:02:31.937
00:02:31.937 crypto:
00:02:31.937
00:02:31.937 compress:
00:02:31.937
00:02:31.937 vdpa:
00:02:31.937
00:02:31.937
00:02:31.937 Message:
00:02:31.937 =================
00:02:31.937 Content Skipped
00:02:31.937 =================
00:02:31.937
00:02:31.937 apps:
00:02:31.937 dumpcap: explicitly disabled via build config
00:02:31.937 graph: explicitly disabled via build config
00:02:31.937 pdump: explicitly disabled via build config
00:02:31.937 proc-info: explicitly disabled via build config
00:02:31.937 test-acl: explicitly disabled via build config
00:02:31.937 test-bbdev: explicitly disabled via build config
00:02:31.937 test-cmdline: explicitly disabled via build config
00:02:31.937 test-compress-perf: explicitly disabled via build config
00:02:31.937 test-crypto-perf: explicitly disabled via build config
00:02:31.937 test-dma-perf: explicitly disabled via build config
00:02:31.937 test-eventdev: explicitly disabled via build config
00:02:31.937 test-fib: explicitly disabled via build config
00:02:31.937 test-flow-perf: explicitly disabled via build config
00:02:31.937 test-gpudev: explicitly disabled via build config
00:02:31.937 test-mldev: explicitly disabled via build config
00:02:31.937 test-pipeline: explicitly disabled via build config
00:02:31.937 test-pmd: explicitly disabled via build config
00:02:31.937 test-regex: explicitly disabled via build config
00:02:31.937 test-sad: explicitly disabled via build config
00:02:31.937 test-security-perf: explicitly disabled via build config
00:02:31.937
00:02:31.937 libs:
00:02:31.937 argparse: explicitly disabled via build config
00:02:31.937 metrics: explicitly disabled via build config
00:02:31.937 acl: explicitly disabled via build config
00:02:31.937 bbdev: explicitly disabled via build config
00:02:31.937 bitratestats: explicitly disabled via build config
00:02:31.937 bpf: explicitly disabled via build config
00:02:31.937 cfgfile: explicitly disabled via build config
00:02:31.937 distributor: explicitly disabled via build config
00:02:31.937 efd: explicitly disabled via build config
00:02:31.937 eventdev: explicitly disabled via build config
00:02:31.937 dispatcher: explicitly disabled via build config
00:02:31.937 gpudev: explicitly disabled via build config
00:02:31.937 gro: explicitly disabled via build config
00:02:31.937 gso: explicitly disabled via build config
00:02:31.937 ip_frag: explicitly disabled via build config
00:02:31.937 jobstats: explicitly disabled via build config
00:02:31.937 latencystats: explicitly disabled via build config
00:02:31.937 lpm: explicitly disabled via build config
00:02:31.937 member: explicitly disabled via build config
00:02:31.937 pcapng: explicitly disabled via build config
00:02:31.937 rawdev: explicitly disabled via build config
00:02:31.937 regexdev: explicitly disabled via build config
00:02:31.937 mldev: explicitly disabled via build config
00:02:31.937 rib: explicitly disabled via build config
00:02:31.937 sched: explicitly disabled via build config
00:02:31.937 stack: explicitly disabled via build config
00:02:31.937 ipsec: explicitly disabled via build config
00:02:31.937 pdcp: explicitly disabled via build config
00:02:31.937 fib: explicitly disabled via build config
00:02:31.937 port: explicitly disabled via build config
00:02:31.937 pdump: explicitly disabled via build config
00:02:31.937 table: explicitly disabled via build config
00:02:31.937 pipeline: explicitly disabled via build config
00:02:31.937 graph: explicitly disabled via build config
00:02:31.937 node: explicitly disabled via build config
00:02:31.937
00:02:31.937 drivers:
00:02:31.937 common/cpt: not in enabled drivers build config
00:02:31.937 common/dpaax: not in enabled drivers build config
00:02:31.937 common/iavf: not in enabled drivers build config
00:02:31.937 common/idpf: not in enabled drivers build config
00:02:31.937 common/ionic: not in enabled drivers build config
00:02:31.937 common/mvep: not in enabled drivers build config
00:02:31.937 common/octeontx: not in enabled drivers build config
00:02:31.937 bus/auxiliary: not in enabled drivers build config
00:02:31.937 bus/cdx: not in enabled drivers build config
00:02:31.937 bus/dpaa: not in enabled drivers build config
00:02:31.937 bus/fslmc: not in enabled drivers build config
00:02:31.937 bus/ifpga: not in enabled drivers build config
00:02:31.937 bus/platform: not in enabled drivers build config
00:02:31.937 bus/uacce: not in enabled drivers build config
00:02:31.937 bus/vmbus: not in enabled drivers build config
00:02:31.937 common/cnxk: not in enabled drivers build config
00:02:31.937 common/mlx5: not in enabled drivers build config
00:02:31.937 common/nfp: not in enabled drivers build config
00:02:31.937 common/nitrox: not in enabled drivers build config
00:02:31.937 common/qat: not in enabled drivers build config
00:02:31.937 common/sfc_efx: not in enabled drivers build config
00:02:31.937 mempool/bucket: not in enabled drivers build config
00:02:31.937 mempool/cnxk: not in enabled drivers build config
00:02:31.937 mempool/dpaa: not in enabled drivers build config
00:02:31.937 mempool/dpaa2: not in enabled drivers build config
00:02:31.937 mempool/octeontx: not in enabled drivers build config
00:02:31.937 mempool/stack: not in enabled drivers build config
00:02:31.937 dma/cnxk: not in enabled drivers build config
00:02:31.937 dma/dpaa: not in enabled drivers build config
00:02:31.937 dma/dpaa2: not in enabled drivers build config
00:02:31.937 dma/hisilicon: not in enabled drivers build config
00:02:31.937 dma/idxd: not in enabled drivers build config
00:02:31.937 dma/ioat: not in enabled drivers build config
00:02:31.938 dma/skeleton: not in enabled drivers build config
00:02:31.938 net/af_packet: not in enabled drivers build config
00:02:31.938 net/af_xdp: not in enabled drivers build config
00:02:31.938 net/ark: not in enabled drivers build config
00:02:31.938 net/atlantic: not in enabled drivers build config
00:02:31.938 net/avp: not in enabled drivers build config
00:02:31.938 net/axgbe: not in enabled drivers build config
00:02:31.938 net/bnx2x: not in enabled drivers build config
00:02:31.938 net/bnxt: not in enabled drivers build config
00:02:31.938 net/bonding: not in enabled drivers build config
00:02:31.938 net/cnxk: not in enabled drivers build config
00:02:31.938 net/cpfl: not in enabled drivers build config
00:02:31.938 net/cxgbe: not in enabled drivers build config
00:02:31.938 net/dpaa: not in enabled drivers build config
00:02:31.938 net/dpaa2: not in enabled drivers build config
00:02:31.938 net/e1000: not in enabled drivers build config
00:02:31.938 net/ena: not in enabled drivers build config
00:02:31.938 net/enetc: not in enabled drivers build config
00:02:31.938 net/enetfec: not in enabled drivers build config
00:02:31.938 net/enic: not in enabled drivers build config
00:02:31.938 net/failsafe: not in enabled drivers build config
00:02:31.938 net/fm10k: not in enabled drivers build config
00:02:31.938 net/gve: not in enabled drivers build config
00:02:31.938 net/hinic: not in enabled drivers build config
00:02:31.938 net/hns3: not in enabled drivers build config
00:02:31.938 net/i40e: not in enabled drivers build config
00:02:31.938 net/iavf: not in enabled drivers build config
00:02:31.938 net/ice: not in enabled drivers build config
00:02:31.938 net/idpf: not in enabled drivers build config
00:02:31.938 net/igc: not in enabled drivers build config
00:02:31.938 net/ionic: not in enabled drivers build config
00:02:31.938 net/ipn3ke: not in enabled drivers build config
00:02:31.938 net/ixgbe: not in enabled drivers build config
00:02:31.938 net/mana: not in enabled drivers build config
00:02:31.938 net/memif: not in enabled drivers build config
00:02:31.938 net/mlx4: not in enabled drivers build config
00:02:31.938 net/mlx5: not in enabled drivers build config
00:02:31.938 net/mvneta: not in enabled drivers build config
00:02:31.938 net/mvpp2: not in enabled drivers build config
00:02:31.938 net/netvsc: not in enabled drivers build config
00:02:31.938 net/nfb: not in enabled drivers build config
00:02:31.938 net/nfp: not in enabled drivers build config
00:02:31.938 net/ngbe: not in enabled drivers build config
00:02:31.938 net/null: not in enabled drivers build config
00:02:31.938 net/octeontx: not in enabled drivers build config
00:02:31.938 net/octeon_ep: not in enabled drivers build config
00:02:31.938 net/pcap: not in enabled drivers build config
00:02:31.938 net/pfe: not in enabled drivers build config
00:02:31.938 net/qede: not in enabled drivers build config
00:02:31.938 net/ring: not in enabled drivers build config
00:02:31.938 net/sfc: not in enabled drivers build config
00:02:31.938 net/softnic: not in enabled drivers build config
00:02:31.938 net/tap: not in enabled drivers build config
00:02:31.938 net/thunderx: not in enabled drivers build config
00:02:31.938 net/txgbe: not in enabled drivers build config
00:02:31.938 net/vdev_netvsc: not in enabled drivers build config
00:02:31.938 net/vhost: not in enabled drivers build config
00:02:31.938 net/virtio: not in enabled drivers build config
00:02:31.938 net/vmxnet3: not in enabled drivers build config
00:02:31.938 raw/*: missing internal dependency, "rawdev"
00:02:31.938 crypto/armv8: not in enabled drivers build config
00:02:31.938 crypto/bcmfs: not in enabled drivers build config
00:02:31.938 crypto/caam_jr: not in enabled drivers build config
00:02:31.938 crypto/ccp: not in enabled drivers build config
00:02:31.938 crypto/cnxk: not in enabled drivers build config
00:02:31.938 crypto/dpaa_sec: not in enabled drivers build config
00:02:31.938 crypto/dpaa2_sec: not in enabled drivers build config
00:02:31.938 crypto/ipsec_mb: not in enabled drivers build config
00:02:31.938 crypto/mlx5: not in enabled drivers build config
00:02:31.938 crypto/mvsam: not in enabled drivers build config
00:02:31.938 crypto/nitrox: not in enabled drivers build config
00:02:31.938 crypto/null: not in enabled drivers build config
00:02:31.938 crypto/octeontx: not in enabled drivers build config
00:02:31.938 crypto/openssl: not in enabled drivers build config
00:02:31.938 crypto/scheduler: not in enabled drivers build config
00:02:31.938 crypto/uadk: not in enabled drivers build config
00:02:31.938 crypto/virtio: not in enabled drivers build config
00:02:31.938 compress/isal: not in enabled drivers build config
00:02:31.938 compress/mlx5: not in enabled drivers build config
00:02:31.938 compress/nitrox: not in enabled drivers build config
00:02:31.938 compress/octeontx: not in enabled drivers build config
00:02:31.938 compress/zlib: not in enabled drivers build config
00:02:31.938 regex/*: missing internal dependency, "regexdev"
00:02:31.938 ml/*: missing internal dependency, "mldev"
00:02:31.938 vdpa/ifc: not in enabled drivers build config
00:02:31.938 vdpa/mlx5: not in enabled drivers build config
00:02:31.938 vdpa/nfp: not in enabled drivers build config
00:02:31.938 vdpa/sfc: not in enabled drivers build config
00:02:31.938 event/*: missing internal dependency, "eventdev"
00:02:31.938 baseband/*: missing internal dependency, "bbdev"
00:02:31.938 gpu/*: missing internal dependency, "gpudev"
00:02:31.938
00:02:31.938
00:02:31.938 Build targets in project: 84
00:02:31.938
00:02:31.938 DPDK 24.03.0
00:02:31.938
00:02:31.938 User defined options
00:02:31.938 buildtype : debug
00:02:31.938 default_library : shared
00:02:31.938 libdir : lib
00:02:31.938 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:31.938 b_sanitize : address
00:02:31.938 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:31.938 c_link_args :
00:02:31.938 cpu_instruction_set: native
00:02:31.938 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:31.938 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:31.938 enable_docs : false
00:02:31.938 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:31.938 enable_kmods : false
00:02:31.938 max_lcores : 128
00:02:31.938 tests : false
00:02:31.938
00:02:31.938 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:32.196 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:32.196 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:32.196 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:32.196 [3/267] Linking static target lib/librte_kvargs.a
00:02:32.455 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:32.455 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:32.455 [6/267] Linking static target lib/librte_log.a
00:02:32.455 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:32.455 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:32.721 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:32.721 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:32.721 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.722 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:32.722 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:32.722 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:32.722 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:32.722 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:32.722 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:32.722 [18/267] Linking static target lib/librte_telemetry.a
00:02:32.979 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:32.979 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:32.979 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:33.236 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:33.236 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:33.236 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:33.236 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:33.236 [26/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.236 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:33.236 [28/267] Linking target lib/librte_log.so.24.1
00:02:33.236 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:33.494 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:33.494 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:33.494 [32/267] Linking target lib/librte_kvargs.so.24.1
00:02:33.494 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:33.494 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.494 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:33.494 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:33.494 [37/267] Linking target lib/librte_telemetry.so.24.1
00:02:33.494 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:33.494 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:33.494 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:33.494 [41/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:33.752 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:33.752 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:33.752 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:33.752 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:33.752 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:33.752 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:34.009 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:34.009 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:34.009 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:34.009 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:34.009 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:34.009 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:34.009 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:34.268 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:34.268 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:34.268 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:34.268 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:34.268 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:34.268 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:34.268 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:34.527 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:34.527 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:34.527 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:34.527 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:34.527 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:34.527 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:34.785 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:34.785 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:34.785 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:34.785 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:34.785 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:34.785 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:34.785 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:34.785 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:35.042 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:35.042 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:35.042 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:35.042 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:35.042 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:35.042 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:35.042 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:35.300 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:35.300 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:35.300 [85/267] Linking static target lib/librte_ring.a
00:02:35.300 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:35.559 [87/267] Linking static target lib/librte_eal.a
00:02:35.559 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:35.559 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:35.559 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:35.559 [91/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.817 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:35.817 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:35.817 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:35.817 [95/267] Linking static target lib/librte_mempool.a
00:02:35.817 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:35.817 [97/267] Linking static target lib/librte_rcu.a
00:02:36.078 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:36.078 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:36.078 [100/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:36.078 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:36.078 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:36.344 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:36.344 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.344 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:36.344 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:36.344 [107/267] Linking static target lib/librte_mbuf.a
00:02:36.344 [108/267] Linking static target lib/librte_meter.a
00:02:36.344 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:36.344 [110/267] Linking static target lib/librte_net.a
00:02:36.602 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:36.602 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:36.602 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:36.602 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:36.602 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.867 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.867 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.125 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:37.125 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:37.125 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.125 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:37.125 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:37.384 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:37.384 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:37.384 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:37.384 [126/267] Linking static target lib/librte_pci.a
00:02:37.384 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:37.384 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:37.384 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:37.643 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:37.643 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:37.643 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:37.643 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:37.643 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:37.643 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:37.643 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.643 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:37.643 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:37.643 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:37.643 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:37.643 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:37.643 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:37.643 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:37.902 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:37.902 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:37.902 [146/267] Linking static target lib/librte_cmdline.a
00:02:37.902 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:38.160 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:38.160 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:38.160 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:38.160 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:38.160 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:38.160 [153/267] Linking static target lib/librte_timer.a
00:02:38.419 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:38.419 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:38.419 [156/267] Linking static target lib/librte_ethdev.a
00:02:38.419 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:38.419 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:38.419 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:38.419 [160/267] Linking static target lib/librte_compressdev.a
00:02:38.678 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:38.678 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:38.678 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:38.678 [164/267] Linking static target lib/librte_hash.a
00:02:38.678 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.936 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:38.936 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:38.936 [168/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:38.936 [169/267] Linking static target lib/librte_dmadev.a
00:02:38.936 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:38.936 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:39.195 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:39.195 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.195 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.195 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:39.453 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:39.453 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:39.453 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:39.453 [179/267] Linking static target lib/librte_cryptodev.a
00:02:39.453 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:39.453 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:39.453 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.453 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.711 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:39.711 [185/267] Linking static target lib/librte_power.a
00:02:39.711 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:39.711 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:39.711 [188/267] Linking static target lib/librte_reorder.a
00:02:39.969 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:39.969 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:39.969 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:39.969 [192/267] Linking static target lib/librte_security.a
00:02:40.228 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.228 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:40.490 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.490 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:40.490 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.490 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:40.749 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:40.749 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:40.749 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:41.007 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:41.007 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:41.007 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:41.007 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:41.007 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:41.007 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:41.007 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:41.265 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:41.265 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.265 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:41.265 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:41.265 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:41.265 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:41.265 [215/267] Linking static target drivers/librte_bus_vdev.a
00:02:41.265 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:41.265 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:41.523 [218/267] Linking static target drivers/librte_bus_pci.a
00:02:41.523 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:41.523 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:41.523 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.523 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:41.523 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:41.523 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:41.523 [225/267] Linking static target drivers/librte_mempool_ring.a
00:02:41.782 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.041 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:42.976 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.976 [229/267] Linking target lib/librte_eal.so.24.1
00:02:43.234 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:43.234 [231/267] Linking target lib/librte_meter.so.24.1
00:02:43.234 [232/267] Linking target lib/librte_dmadev.so.24.1
00:02:43.234 [233/267] Linking target lib/librte_pci.so.24.1
00:02:43.234 [234/267] Linking target lib/librte_timer.so.24.1
00:02:43.234 [235/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:43.234 [236/267] Linking target lib/librte_ring.so.24.1
00:02:43.234 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:43.234 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:43.234 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:43.234 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:43.234 [241/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:43.234 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:43.234 [243/267] Linking target lib/librte_rcu.so.24.1
00:02:43.234 [244/267] Linking target lib/librte_mempool.so.24.1
00:02:43.492 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:43.492 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:43.492 [247/267] Linking target lib/librte_mbuf.so.24.1
00:02:43.492 [248/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:43.492 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:43.749 [250/267] Linking target lib/librte_cryptodev.so.24.1
00:02:43.749 [251/267] Linking target lib/librte_reorder.so.24.1
00:02:43.749 [252/267] Linking target lib/librte_compressdev.so.24.1
00:02:43.749 [253/267] Linking target lib/librte_net.so.24.1
00:02:43.749 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:43.749 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:43.749 [256/267] Linking target lib/librte_security.so.24.1
00:02:43.749 [257/267] Linking target lib/librte_hash.so.24.1
00:02:43.749 [258/267] Linking target lib/librte_cmdline.so.24.1
00:02:44.007 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:44.007 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.007 [261/267] Linking target lib/librte_ethdev.so.24.1
00:02:44.265 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:44.265 [263/267] Linking target lib/librte_power.so.24.1
00:02:45.199 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:45.199 [265/267] Linking static target lib/librte_vhost.a
00:02:46.132 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.390 [267/267] Linking target lib/librte_vhost.so.24.1
00:02:46.390 INFO: autodetecting backend as ninja
00:02:46.390 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:03:01.296 CC lib/ut_mock/mock.o
00:03:01.296 CC lib/log/log.o
00:03:01.296 CC lib/log/log_flags.o
00:03:01.296 CC lib/log/log_deprecated.o
00:03:01.296 CC lib/ut/ut.o
00:03:01.296 LIB libspdk_log.a
00:03:01.296 LIB libspdk_ut.a
00:03:01.296 LIB libspdk_ut_mock.a
00:03:01.296 SO libspdk_ut_mock.so.6.0
00:03:01.296 SO libspdk_ut.so.2.0
00:03:01.296 SO libspdk_log.so.7.0
00:03:01.296 SYMLINK libspdk_ut_mock.so
00:03:01.296 SYMLINK libspdk_log.so
00:03:01.296 SYMLINK libspdk_ut.so
00:03:01.296 CC lib/util/base64.o
00:03:01.296 CC lib/util/bit_array.o
00:03:01.296 CC lib/util/cpuset.o
00:03:01.296 CC lib/util/crc16.o
00:03:01.296 CC lib/util/crc32.o
00:03:01.296 CC lib/util/crc32c.o
00:03:01.296 CC lib/ioat/ioat.o
00:03:01.296 CC lib/dma/dma.o
00:03:01.296 CXX lib/trace_parser/trace.o
00:03:01.296 CC lib/vfio_user/host/vfio_user_pci.o
00:03:01.296 CC lib/util/crc32_ieee.o
00:03:01.296 CC lib/util/crc64.o
00:03:01.296 CC lib/util/dif.o
00:03:01.296 CC lib/util/fd.o
00:03:01.296 CC lib/util/fd_group.o
00:03:01.296 CC lib/util/file.o
00:03:01.296 LIB libspdk_dma.a
00:03:01.296 SO libspdk_dma.so.5.0
00:03:01.296 CC lib/vfio_user/host/vfio_user.o
00:03:01.296 CC lib/util/hexlify.o
00:03:01.296 SYMLINK libspdk_dma.so
00:03:01.296 CC lib/util/iov.o
00:03:01.296 CC lib/util/math.o
00:03:01.296 LIB libspdk_ioat.a
00:03:01.296 CC lib/util/net.o
00:03:01.296 SO libspdk_ioat.so.7.0
00:03:01.296 CC lib/util/pipe.o
00:03:01.296 CC lib/util/strerror_tls.o
00:03:01.296 SYMLINK libspdk_ioat.so
00:03:01.296 CC lib/util/string.o
00:03:01.296 CC lib/util/uuid.o
00:03:01.296 CC lib/util/xor.o
00:03:01.296 LIB libspdk_vfio_user.a
00:03:01.296 CC lib/util/zipf.o
00:03:01.296 SO libspdk_vfio_user.so.5.0
00:03:01.296 CC lib/util/md5.o
00:03:01.296 SYMLINK libspdk_vfio_user.so
00:03:01.555 LIB libspdk_util.a
00:03:01.555 SO libspdk_util.so.10.0
00:03:01.555 LIB libspdk_trace_parser.a
00:03:01.555 SO libspdk_trace_parser.so.6.0
00:03:01.813 SYMLINK libspdk_util.so
00:03:01.813 SYMLINK libspdk_trace_parser.so
00:03:01.813 CC lib/env_dpdk/env.o
00:03:01.813 CC lib/conf/conf.o
00:03:01.813 CC lib/env_dpdk/memory.o
00:03:01.813 CC lib/env_dpdk/pci.o
00:03:01.813 CC lib/env_dpdk/init.o
00:03:01.813 CC lib/idxd/idxd.o
00:03:01.813 CC lib/rdma_provider/common.o
00:03:01.813 CC lib/json/json_parse.o
00:03:01.813 CC lib/rdma_utils/rdma_utils.o
00:03:01.813 CC lib/vmd/vmd.o
00:03:02.071 LIB libspdk_conf.a
00:03:02.071 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:02.071 SO libspdk_conf.so.6.0
00:03:02.071 SYMLINK libspdk_conf.so
00:03:02.071 CC lib/idxd/idxd_user.o
00:03:02.071 CC lib/json/json_util.o
00:03:02.071 CC lib/json/json_write.o
00:03:02.071 LIB libspdk_rdma_utils.a
00:03:02.071 CC lib/env_dpdk/threads.o
00:03:02.071 SO libspdk_rdma_utils.so.1.0
00:03:02.071 LIB libspdk_rdma_provider.a
00:03:02.330 SYMLINK libspdk_rdma_utils.so
00:03:02.330 CC lib/env_dpdk/pci_ioat.o
00:03:02.330 SO libspdk_rdma_provider.so.6.0
00:03:02.330 CC lib/idxd/idxd_kernel.o
00:03:02.330 CC lib/env_dpdk/pci_virtio.o
00:03:02.330 SYMLINK libspdk_rdma_provider.so
00:03:02.330 CC lib/env_dpdk/pci_vmd.o
00:03:02.330 CC lib/env_dpdk/pci_idxd.o
00:03:02.330 CC lib/vmd/led.o
00:03:02.330 LIB libspdk_json.a
00:03:02.330 CC lib/env_dpdk/pci_event.o
00:03:02.330 SO libspdk_json.so.6.0
00:03:02.330 CC lib/env_dpdk/sigbus_handler.o
00:03:02.330 LIB libspdk_idxd.a
00:03:02.330 CC lib/env_dpdk/pci_dpdk.o
00:03:02.330 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:02.330 SO libspdk_idxd.so.12.1
00:03:02.330 SYMLINK libspdk_json.so
00:03:02.330 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:02.330 SYMLINK libspdk_idxd.so
00:03:02.596 CC lib/jsonrpc/jsonrpc_server.o
00:03:02.596 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:02.596 CC lib/jsonrpc/jsonrpc_client.o
00:03:02.596 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:02.596 LIB libspdk_vmd.a
00:03:02.596 SO libspdk_vmd.so.6.0
00:03:02.883 SYMLINK libspdk_vmd.so
00:03:02.883 LIB libspdk_jsonrpc.a
00:03:02.883 SO libspdk_jsonrpc.so.6.0
00:03:02.883 SYMLINK libspdk_jsonrpc.so
00:03:03.140 CC lib/rpc/rpc.o
00:03:03.140 LIB libspdk_env_dpdk.a
00:03:03.398 LIB libspdk_rpc.a
00:03:03.398 SO libspdk_env_dpdk.so.15.0
00:03:03.398 SO libspdk_rpc.so.6.0
00:03:03.398 SYMLINK libspdk_rpc.so
00:03:03.398 SYMLINK libspdk_env_dpdk.so
00:03:03.657 CC lib/trace/trace.o
00:03:03.657 CC lib/trace/trace_flags.o
00:03:03.657 CC lib/trace/trace_rpc.o
00:03:03.657 CC lib/notify/notify.o
00:03:03.657 CC lib/notify/notify_rpc.o
00:03:03.657 CC lib/keyring/keyring.o
00:03:03.657 CC lib/keyring/keyring_rpc.o
00:03:03.657 LIB libspdk_notify.a
00:03:03.657 SO libspdk_notify.so.6.0
00:03:03.657 LIB libspdk_keyring.a
00:03:03.915 LIB libspdk_trace.a
00:03:03.915 SO libspdk_keyring.so.2.0
00:03:03.915 SO libspdk_trace.so.11.0
00:03:03.915 SYMLINK libspdk_notify.so
00:03:03.915 SYMLINK libspdk_keyring.so
00:03:03.915 SYMLINK libspdk_trace.so
00:03:04.173 CC lib/sock/sock.o
00:03:04.173 CC lib/sock/sock_rpc.o
00:03:04.173 CC lib/thread/iobuf.o
00:03:04.173 CC lib/thread/thread.o
00:03:04.431 LIB libspdk_sock.a
00:03:04.431 SO libspdk_sock.so.10.0
00:03:04.431 SYMLINK libspdk_sock.so
00:03:04.689 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:04.689 CC lib/nvme/nvme_ns_cmd.o
00:03:04.689 CC lib/nvme/nvme_ctrlr.o
00:03:04.689 CC lib/nvme/nvme_fabric.o
00:03:04.689 CC lib/nvme/nvme_ns.o
00:03:04.689 CC lib/nvme/nvme_pcie.o
00:03:04.689 CC lib/nvme/nvme_pcie_common.o
00:03:04.689 CC lib/nvme/nvme_qpair.o
00:03:04.689 CC lib/nvme/nvme.o
00:03:05.255 CC lib/nvme/nvme_quirks.o
00:03:05.255 LIB libspdk_thread.a
00:03:05.255 SO libspdk_thread.so.10.2
00:03:05.255 SYMLINK libspdk_thread.so
00:03:05.255 CC lib/nvme/nvme_transport.o
00:03:05.255 CC lib/nvme/nvme_discovery.o
00:03:05.513 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:05.513 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:05.513 CC lib/accel/accel.o
00:03:05.513 CC lib/blob/blobstore.o
00:03:05.772 CC lib/blob/request.o
00:03:05.772 CC lib/init/json_config.o
00:03:05.772 CC lib/accel/accel_rpc.o
00:03:05.772 CC lib/accel/accel_sw.o
00:03:05.772 CC lib/virtio/virtio.o
00:03:05.772 CC lib/virtio/virtio_vhost_user.o
00:03:06.029 CC lib/virtio/virtio_vfio_user.o
00:03:06.029 CC lib/init/subsystem.o
00:03:06.029 CC lib/virtio/virtio_pci.o
00:03:06.029 CC lib/blob/zeroes.o
00:03:06.029 CC lib/init/subsystem_rpc.o
00:03:06.029 CC lib/init/rpc.o
00:03:06.029 CC lib/nvme/nvme_tcp.o
00:03:06.029 CC lib/nvme/nvme_opal.o
00:03:06.029 CC lib/nvme/nvme_io_msg.o
00:03:06.287 CC lib/blob/blob_bs_dev.o
00:03:06.287 CC lib/fsdev/fsdev.o
00:03:06.287 LIB libspdk_virtio.a
00:03:06.287 LIB libspdk_init.a
00:03:06.287 SO libspdk_virtio.so.7.0
00:03:06.287 SO libspdk_init.so.6.0
00:03:06.287 SYMLINK libspdk_init.so
00:03:06.287 SYMLINK libspdk_virtio.so
00:03:06.287 CC lib/nvme/nvme_poll_group.o
00:03:06.287 CC lib/fsdev/fsdev_io.o
00:03:06.287 LIB libspdk_accel.a
00:03:06.544 SO libspdk_accel.so.16.0
00:03:06.544 CC lib/event/app.o
00:03:06.544 SYMLINK libspdk_accel.so
00:03:06.544 CC lib/event/reactor.o
00:03:06.544 CC lib/event/log_rpc.o
00:03:06.544 CC lib/event/app_rpc.o
00:03:06.848 CC lib/fsdev/fsdev_rpc.o
00:03:06.848 CC lib/nvme/nvme_zns.o
00:03:06.848 CC lib/nvme/nvme_stubs.o
00:03:06.848 CC lib/nvme/nvme_auth.o
00:03:06.848 CC lib/nvme/nvme_cuse.o
00:03:06.848 CC lib/nvme/nvme_rdma.o
00:03:06.848 LIB libspdk_fsdev.a
00:03:06.848 CC lib/event/scheduler_static.o
00:03:06.848 SO libspdk_fsdev.so.1.0
00:03:06.849 SYMLINK libspdk_fsdev.so
00:03:07.122 LIB libspdk_event.a
00:03:07.122 SO libspdk_event.so.15.0
00:03:07.122 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:07.122 CC lib/bdev/bdev.o
00:03:07.122 SYMLINK libspdk_event.so
00:03:07.122 CC lib/bdev/bdev_rpc.o
00:03:07.122 CC lib/bdev/bdev_zone.o
00:03:07.122 CC lib/bdev/part.o
00:03:07.378 CC lib/bdev/scsi_nvme.o
00:03:07.635 LIB libspdk_fuse_dispatcher.a
00:03:07.635 SO libspdk_fuse_dispatcher.so.1.0
00:03:07.635 SYMLINK libspdk_fuse_dispatcher.so
00:03:07.893 LIB libspdk_nvme.a
00:03:08.151 SO libspdk_nvme.so.14.0
00:03:08.151 SYMLINK libspdk_nvme.so
00:03:09.086 LIB libspdk_blob.a
00:03:09.086 SO libspdk_blob.so.11.0
00:03:09.086 SYMLINK libspdk_blob.so
00:03:09.346 CC lib/blobfs/blobfs.o
00:03:09.346 CC lib/blobfs/tree.o
00:03:09.346 CC lib/lvol/lvol.o
00:03:09.915 LIB libspdk_bdev.a
00:03:09.915 SO libspdk_bdev.so.17.0
00:03:09.915 LIB libspdk_blobfs.a
00:03:09.915 SO libspdk_blobfs.so.10.0
00:03:09.915 SYMLINK libspdk_bdev.so
00:03:09.915 SYMLINK libspdk_blobfs.so
00:03:10.173 CC lib/nvmf/ctrlr.o
00:03:10.173 CC lib/nvmf/ctrlr_discovery.o
00:03:10.173 CC lib/ublk/ublk.o
00:03:10.173 CC lib/ftl/ftl_core.o
00:03:10.173 CC lib/nvmf/subsystem.o
00:03:10.173 CC lib/ublk/ublk_rpc.o
00:03:10.173 CC lib/nvmf/ctrlr_bdev.o
00:03:10.173 CC lib/scsi/dev.o
00:03:10.173 CC lib/nbd/nbd.o
00:03:10.173 LIB libspdk_lvol.a
00:03:10.173 SO libspdk_lvol.so.10.0
00:03:10.173 SYMLINK libspdk_lvol.so
00:03:10.173 CC lib/nbd/nbd_rpc.o
00:03:10.173 CC lib/ftl/ftl_init.o
00:03:10.173 CC lib/scsi/lun.o
00:03:10.431 CC lib/nvmf/nvmf.o
00:03:10.431 CC lib/ftl/ftl_layout.o
00:03:10.431 CC lib/ftl/ftl_debug.o
00:03:10.431 LIB libspdk_nbd.a
00:03:10.431 SO libspdk_nbd.so.7.0
00:03:10.431 CC lib/scsi/port.o
00:03:10.692 CC lib/nvmf/nvmf_rpc.o
00:03:10.692 LIB libspdk_ublk.a
00:03:10.692 SYMLINK libspdk_nbd.so
00:03:10.692 CC lib/nvmf/transport.o
00:03:10.692 SO libspdk_ublk.so.3.0
00:03:10.692 CC lib/scsi/scsi.o
00:03:10.692 SYMLINK libspdk_ublk.so
00:03:10.692 CC lib/ftl/ftl_io.o
00:03:10.692 CC lib/ftl/ftl_sb.o
00:03:10.692 CC lib/nvmf/tcp.o
00:03:10.692 CC lib/scsi/scsi_bdev.o
00:03:10.692 CC lib/nvmf/stubs.o
00:03:10.952 CC lib/ftl/ftl_l2p.o
00:03:10.952 CC lib/ftl/ftl_l2p_flat.o
00:03:10.952 CC lib/ftl/ftl_nv_cache.o
00:03:10.952 CC lib/ftl/ftl_band.o
00:03:10.952 CC lib/ftl/ftl_band_ops.o
00:03:11.210 CC lib/nvmf/mdns_server.o
00:03:11.210 CC lib/scsi/scsi_pr.o
00:03:11.210 CC lib/scsi/scsi_rpc.o
00:03:11.210 CC lib/scsi/task.o
00:03:11.470 CC lib/ftl/ftl_writer.o
00:03:11.470 CC lib/ftl/ftl_rq.o
00:03:11.470 CC lib/ftl/ftl_reloc.o
00:03:11.470 CC lib/nvmf/rdma.o
00:03:11.470 CC lib/nvmf/auth.o
00:03:11.470 LIB libspdk_scsi.a
00:03:11.470 CC lib/ftl/ftl_l2p_cache.o
00:03:11.470 CC lib/ftl/ftl_p2l.o
00:03:11.470 CC lib/ftl/ftl_p2l_log.o
00:03:11.470 SO libspdk_scsi.so.9.0
00:03:11.743 CC lib/ftl/mngt/ftl_mngt.o
00:03:11.743 SYMLINK libspdk_scsi.so
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:11.743 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:12.001 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:12.001 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:12.001 CC lib/iscsi/conn.o
00:03:12.001 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:12.001 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:12.001 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:12.260 CC lib/vhost/vhost.o
00:03:12.260 CC lib/ftl/utils/ftl_conf.o
00:03:12.260 CC lib/vhost/vhost_rpc.o
00:03:12.260 CC lib/vhost/vhost_scsi.o
00:03:12.260 CC lib/ftl/utils/ftl_md.o
00:03:12.260 CC lib/iscsi/init_grp.o
00:03:12.260 CC lib/iscsi/iscsi.o
00:03:12.525 CC lib/iscsi/param.o
00:03:12.525 CC lib/iscsi/portal_grp.o
00:03:12.525 CC lib/ftl/utils/ftl_mempool.o
00:03:12.525 CC lib/iscsi/tgt_node.o
00:03:12.784 CC lib/iscsi/iscsi_subsystem.o
00:03:12.784 CC lib/ftl/utils/ftl_bitmap.o
00:03:12.784 CC lib/ftl/utils/ftl_property.o
00:03:12.784 CC lib/vhost/vhost_blk.o
00:03:12.784 CC lib/iscsi/iscsi_rpc.o
00:03:12.784 CC lib/iscsi/task.o
00:03:12.784 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:12.784 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:13.043 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:13.043 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:13.043 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:13.043 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:13.043 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:13.043 CC lib/vhost/rte_vhost_user.o
00:03:13.043 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:13.043 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:13.043 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:13.043 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:13.301 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:13.301 LIB libspdk_nvmf.a
00:03:13.301 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:13.301 CC lib/ftl/base/ftl_base_dev.o
00:03:13.301 CC lib/ftl/base/ftl_base_bdev.o
00:03:13.301 SO libspdk_nvmf.so.19.0
00:03:13.301 CC lib/ftl/ftl_trace.o
00:03:13.559 SYMLINK libspdk_nvmf.so
00:03:13.559 LIB libspdk_ftl.a
00:03:13.818 SO libspdk_ftl.so.9.0
00:03:13.818 LIB libspdk_iscsi.a
00:03:13.818 SO libspdk_iscsi.so.8.0
00:03:13.818 LIB libspdk_vhost.a
00:03:13.818 SYMLINK libspdk_ftl.so
00:03:13.818 SYMLINK libspdk_iscsi.so
00:03:13.818 SO libspdk_vhost.so.8.0
00:03:14.076 SYMLINK libspdk_vhost.so
00:03:14.334 CC module/env_dpdk/env_dpdk_rpc.o
00:03:14.334 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:14.334 CC module/fsdev/aio/fsdev_aio.o
00:03:14.334 CC module/accel/error/accel_error.o
00:03:14.334 CC module/keyring/file/keyring.o
00:03:14.334 CC module/scheduler/gscheduler/gscheduler.o
00:03:14.334 CC module/keyring/linux/keyring.o
00:03:14.334 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:14.334 CC module/sock/posix/posix.o
00:03:14.334 CC module/blob/bdev/blob_bdev.o
00:03:14.334 LIB libspdk_env_dpdk_rpc.a
00:03:14.334 SO libspdk_env_dpdk_rpc.so.6.0
00:03:14.334 CC module/keyring/linux/keyring_rpc.o
00:03:14.334 LIB libspdk_scheduler_gscheduler.a
00:03:14.334 CC module/accel/error/accel_error_rpc.o
00:03:14.334 CC module/keyring/file/keyring_rpc.o
00:03:14.334 SO libspdk_scheduler_gscheduler.so.4.0
00:03:14.592 SYMLINK libspdk_env_dpdk_rpc.so
00:03:14.592 LIB libspdk_scheduler_dpdk_governor.a
00:03:14.592 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:14.592 SYMLINK libspdk_scheduler_gscheduler.so
00:03:14.592 LIB libspdk_scheduler_dynamic.a
00:03:14.592 SO libspdk_scheduler_dynamic.so.4.0
00:03:14.592 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:14.592 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:14.592 LIB libspdk_keyring_linux.a
00:03:14.592 SYMLINK libspdk_scheduler_dynamic.so
00:03:14.592 SO libspdk_keyring_linux.so.1.0
00:03:14.592 CC module/fsdev/aio/linux_aio_mgr.o
00:03:14.592 LIB libspdk_blob_bdev.a
00:03:14.592 LIB libspdk_keyring_file.a
00:03:14.592 SO libspdk_blob_bdev.so.11.0
00:03:14.592 SO libspdk_keyring_file.so.2.0
00:03:14.592 SYMLINK libspdk_keyring_linux.so
00:03:14.592 LIB libspdk_accel_error.a
00:03:14.592 SO libspdk_accel_error.so.2.0
00:03:14.592 SYMLINK libspdk_keyring_file.so
00:03:14.592 SYMLINK libspdk_blob_bdev.so
00:03:14.592 SYMLINK libspdk_accel_error.so
00:03:14.592 CC module/accel/ioat/accel_ioat.o
00:03:14.592 CC module/accel/ioat/accel_ioat_rpc.o
00:03:14.592 CC module/accel/dsa/accel_dsa.o
00:03:14.851 CC module/accel/dsa/accel_dsa_rpc.o
00:03:14.851 CC module/accel/iaa/accel_iaa.o
00:03:14.851 LIB libspdk_accel_ioat.a
00:03:14.851 SO libspdk_accel_ioat.so.6.0
00:03:14.851 CC module/bdev/error/vbdev_error.o
00:03:14.851 CC module/bdev/delay/vbdev_delay.o
00:03:14.851 CC module/blobfs/bdev/blobfs_bdev.o
00:03:14.851 CC module/bdev/gpt/gpt.o
00:03:14.851 CC module/bdev/gpt/vbdev_gpt.o
00:03:14.851 SYMLINK libspdk_accel_ioat.so
00:03:14.851 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:14.851 LIB libspdk_sock_posix.a
00:03:14.851 LIB libspdk_accel_dsa.a
00:03:14.851 SO libspdk_accel_dsa.so.5.0
00:03:14.851 SO libspdk_sock_posix.so.6.0
00:03:14.851 CC module/accel/iaa/accel_iaa_rpc.o
00:03:15.109 LIB libspdk_fsdev_aio.a
00:03:15.109 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:15.109 CC module/bdev/error/vbdev_error_rpc.o
00:03:15.109 SO libspdk_fsdev_aio.so.1.0
00:03:15.109 SYMLINK libspdk_accel_dsa.so
00:03:15.109 SYMLINK libspdk_sock_posix.so
00:03:15.109 LIB libspdk_bdev_gpt.a
00:03:15.109 LIB libspdk_accel_iaa.a
00:03:15.109 SO libspdk_bdev_gpt.so.6.0
00:03:15.109 SYMLINK libspdk_fsdev_aio.so
00:03:15.109 SO libspdk_accel_iaa.so.3.0
00:03:15.109 LIB libspdk_blobfs_bdev.a
00:03:15.109 LIB libspdk_bdev_error.a
00:03:15.109 CC module/bdev/lvol/vbdev_lvol.o
00:03:15.109 SYMLINK libspdk_bdev_gpt.so
00:03:15.109 SO libspdk_blobfs_bdev.so.6.0
00:03:15.109 SO libspdk_bdev_error.so.6.0
00:03:15.109 SYMLINK libspdk_accel_iaa.so
00:03:15.109 LIB libspdk_bdev_delay.a
00:03:15.109 CC module/bdev/null/bdev_null.o
00:03:15.109 CC module/bdev/null/bdev_null_rpc.o
00:03:15.109 CC module/bdev/malloc/bdev_malloc.o
00:03:15.109 SO libspdk_bdev_delay.so.6.0
00:03:15.109 SYMLINK libspdk_bdev_error.so
00:03:15.109 SYMLINK libspdk_blobfs_bdev.so
00:03:15.109 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:15.109 CC module/bdev/passthru/vbdev_passthru.o
00:03:15.109 CC module/bdev/nvme/bdev_nvme.o
00:03:15.366 SYMLINK libspdk_bdev_delay.so
00:03:15.366 CC module/bdev/raid/bdev_raid.o
00:03:15.366 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:15.366 CC module/bdev/split/vbdev_split.o
00:03:15.366 CC module/bdev/raid/bdev_raid_rpc.o
00:03:15.366 CC module/bdev/raid/bdev_raid_sb.o
00:03:15.366 LIB libspdk_bdev_null.a
00:03:15.366 LIB libspdk_bdev_passthru.a
00:03:15.367 SO libspdk_bdev_null.so.6.0
00:03:15.367 SO libspdk_bdev_passthru.so.6.0
00:03:15.367 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:15.367 CC module/bdev/split/vbdev_split_rpc.o
00:03:15.624 CC module/bdev/raid/raid0.o
00:03:15.624 SYMLINK libspdk_bdev_null.so
00:03:15.624 SYMLINK libspdk_bdev_passthru.so
00:03:15.624 LIB libspdk_bdev_lvol.a
00:03:15.624 SO libspdk_bdev_lvol.so.6.0
00:03:15.624 LIB libspdk_bdev_malloc.a
00:03:15.624 LIB libspdk_bdev_split.a
00:03:15.624 SO libspdk_bdev_malloc.so.6.0
00:03:15.624 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:15.624 CC module/bdev/raid/raid1.o
00:03:15.624 CC module/bdev/xnvme/bdev_xnvme.o
00:03:15.624 CC module/bdev/aio/bdev_aio.o
00:03:15.624 SO libspdk_bdev_split.so.6.0
00:03:15.624 SYMLINK libspdk_bdev_lvol.so
00:03:15.624 CC module/bdev/aio/bdev_aio_rpc.o
00:03:15.624 SYMLINK libspdk_bdev_malloc.so
00:03:15.624 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:15.624 SYMLINK libspdk_bdev_split.so
00:03:15.624 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:15.920 CC module/bdev/nvme/nvme_rpc.o
00:03:15.920 CC module/bdev/raid/concat.o
00:03:15.920 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:03:15.920 CC module/bdev/nvme/bdev_mdns_client.o
00:03:15.920 CC module/bdev/ftl/bdev_ftl.o
00:03:15.920 LIB libspdk_bdev_zone_block.a
00:03:15.920 CC module/bdev/nvme/vbdev_opal.o
00:03:15.920 LIB libspdk_bdev_xnvme.a
00:03:15.920 LIB libspdk_bdev_aio.a
00:03:15.920 SO libspdk_bdev_zone_block.so.6.0
00:03:15.920 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:15.920 SO libspdk_bdev_xnvme.so.3.0
00:03:15.920 SO libspdk_bdev_aio.so.6.0
00:03:16.178 SYMLINK libspdk_bdev_xnvme.so
00:03:16.178 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:16.178 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:16.178 SYMLINK libspdk_bdev_zone_block.so
00:03:16.178 SYMLINK libspdk_bdev_aio.so
00:03:16.178 LIB libspdk_bdev_raid.a
00:03:16.178 SO libspdk_bdev_raid.so.6.0
00:03:16.178 CC module/bdev/iscsi/bdev_iscsi.o
00:03:16.178 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:16.178 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:16.178 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:16.178 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:16.178 SYMLINK libspdk_bdev_raid.so
00:03:16.178 LIB libspdk_bdev_ftl.a
00:03:16.178 SO libspdk_bdev_ftl.so.6.0
00:03:16.435 SYMLINK libspdk_bdev_ftl.so
00:03:16.435 LIB libspdk_bdev_iscsi.a
00:03:16.435 SO libspdk_bdev_iscsi.so.6.0
00:03:16.692 SYMLINK libspdk_bdev_iscsi.so
00:03:16.692 LIB libspdk_bdev_virtio.a
00:03:16.692 SO libspdk_bdev_virtio.so.6.0
00:03:16.692 SYMLINK libspdk_bdev_virtio.so
00:03:17.625 LIB libspdk_bdev_nvme.a
00:03:17.625 SO libspdk_bdev_nvme.so.7.0
00:03:17.625 SYMLINK libspdk_bdev_nvme.so
00:03:17.883 CC module/event/subsystems/iobuf/iobuf.o
00:03:17.883 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:17.883 CC module/event/subsystems/vmd/vmd.o
00:03:17.883 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.883 CC module/event/subsystems/fsdev/fsdev.o 00:03:17.883 CC module/event/subsystems/keyring/keyring.o 00:03:17.883 CC module/event/subsystems/sock/sock.o 00:03:17.883 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.883 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.141 LIB libspdk_event_keyring.a 00:03:18.141 SO libspdk_event_keyring.so.1.0 00:03:18.141 LIB libspdk_event_iobuf.a 00:03:18.141 LIB libspdk_event_vhost_blk.a 00:03:18.141 LIB libspdk_event_fsdev.a 00:03:18.141 LIB libspdk_event_vmd.a 00:03:18.141 LIB libspdk_event_sock.a 00:03:18.141 SO libspdk_event_vhost_blk.so.3.0 00:03:18.141 SO libspdk_event_iobuf.so.3.0 00:03:18.141 SO libspdk_event_fsdev.so.1.0 00:03:18.141 SO libspdk_event_vmd.so.6.0 00:03:18.141 SYMLINK libspdk_event_keyring.so 00:03:18.141 SO libspdk_event_sock.so.5.0 00:03:18.141 LIB libspdk_event_scheduler.a 00:03:18.141 SYMLINK libspdk_event_fsdev.so 00:03:18.141 SYMLINK libspdk_event_vhost_blk.so 00:03:18.141 SYMLINK libspdk_event_vmd.so 00:03:18.141 SYMLINK libspdk_event_iobuf.so 00:03:18.141 SO libspdk_event_scheduler.so.4.0 00:03:18.141 SYMLINK libspdk_event_sock.so 00:03:18.141 SYMLINK libspdk_event_scheduler.so 00:03:18.397 CC module/event/subsystems/accel/accel.o 00:03:18.397 LIB libspdk_event_accel.a 00:03:18.397 SO libspdk_event_accel.so.6.0 00:03:18.656 SYMLINK libspdk_event_accel.so 00:03:18.914 CC module/event/subsystems/bdev/bdev.o 00:03:18.914 LIB libspdk_event_bdev.a 00:03:18.914 SO libspdk_event_bdev.so.6.0 00:03:18.914 SYMLINK libspdk_event_bdev.so 00:03:19.172 CC module/event/subsystems/nbd/nbd.o 00:03:19.172 CC module/event/subsystems/ublk/ublk.o 00:03:19.172 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.172 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.172 CC module/event/subsystems/scsi/scsi.o 00:03:19.172 LIB libspdk_event_nbd.a 00:03:19.172 LIB libspdk_event_ublk.a 00:03:19.172 LIB libspdk_event_scsi.a 00:03:19.430 SO libspdk_event_nbd.so.6.0 00:03:19.430 SO libspdk_event_ublk.so.3.0 00:03:19.430 SO libspdk_event_scsi.so.6.0 00:03:19.430 SYMLINK libspdk_event_ublk.so 00:03:19.430 SYMLINK libspdk_event_nbd.so 00:03:19.430 SYMLINK libspdk_event_scsi.so 00:03:19.430 LIB libspdk_event_nvmf.a 00:03:19.430 SO libspdk_event_nvmf.so.6.0 00:03:19.430 SYMLINK libspdk_event_nvmf.so 00:03:19.430 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.430 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.688 LIB libspdk_event_vhost_scsi.a 00:03:19.688 LIB libspdk_event_iscsi.a 00:03:19.688 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.688 SO libspdk_event_iscsi.so.6.0 00:03:19.688 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.688 SYMLINK libspdk_event_iscsi.so 00:03:19.972 SO libspdk.so.6.0 00:03:19.972 SYMLINK libspdk.so 00:03:19.972 CC test/rpc_client/rpc_client_test.o 00:03:19.972 CC app/trace_record/trace_record.o 00:03:19.972 TEST_HEADER include/spdk/accel.h 00:03:19.972 TEST_HEADER include/spdk/accel_module.h 00:03:19.972 TEST_HEADER include/spdk/assert.h 00:03:19.972 TEST_HEADER include/spdk/barrier.h 00:03:19.972 TEST_HEADER include/spdk/base64.h 00:03:19.972 TEST_HEADER include/spdk/bdev.h 00:03:19.972 TEST_HEADER include/spdk/bdev_module.h 00:03:19.972 CXX app/trace/trace.o 00:03:19.972 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.972 TEST_HEADER include/spdk/bit_array.h 00:03:19.972 TEST_HEADER include/spdk/bit_pool.h 00:03:19.972 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.972 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.972 TEST_HEADER 
include/spdk/blobfs.h 00:03:19.972 TEST_HEADER include/spdk/blob.h 00:03:19.972 TEST_HEADER include/spdk/conf.h 00:03:19.972 TEST_HEADER include/spdk/config.h 00:03:19.972 TEST_HEADER include/spdk/cpuset.h 00:03:19.972 TEST_HEADER include/spdk/crc16.h 00:03:19.972 TEST_HEADER include/spdk/crc32.h 00:03:19.972 TEST_HEADER include/spdk/crc64.h 00:03:19.972 TEST_HEADER include/spdk/dif.h 00:03:19.972 TEST_HEADER include/spdk/dma.h 00:03:19.972 TEST_HEADER include/spdk/endian.h 00:03:19.972 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.972 TEST_HEADER include/spdk/env.h 00:03:19.972 TEST_HEADER include/spdk/event.h 00:03:19.972 TEST_HEADER include/spdk/fd_group.h 00:03:19.972 TEST_HEADER include/spdk/fd.h 00:03:19.972 TEST_HEADER include/spdk/file.h 00:03:20.236 CC app/nvmf_tgt/nvmf_main.o 00:03:20.236 TEST_HEADER include/spdk/fsdev.h 00:03:20.236 TEST_HEADER include/spdk/fsdev_module.h 00:03:20.236 TEST_HEADER include/spdk/ftl.h 00:03:20.236 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:20.236 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.236 TEST_HEADER include/spdk/hexlify.h 00:03:20.236 TEST_HEADER include/spdk/histogram_data.h 00:03:20.236 TEST_HEADER include/spdk/idxd.h 00:03:20.236 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.236 TEST_HEADER include/spdk/init.h 00:03:20.236 TEST_HEADER include/spdk/ioat.h 00:03:20.236 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.236 CC test/thread/poller_perf/poller_perf.o 00:03:20.236 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.237 TEST_HEADER include/spdk/json.h 00:03:20.237 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.237 TEST_HEADER include/spdk/keyring.h 00:03:20.237 TEST_HEADER include/spdk/keyring_module.h 00:03:20.237 TEST_HEADER include/spdk/likely.h 00:03:20.237 TEST_HEADER include/spdk/log.h 00:03:20.237 TEST_HEADER include/spdk/lvol.h 00:03:20.237 TEST_HEADER include/spdk/md5.h 00:03:20.237 TEST_HEADER include/spdk/memory.h 00:03:20.237 CC examples/util/zipf/zipf.o 00:03:20.237 TEST_HEADER include/spdk/mmio.h 00:03:20.237 TEST_HEADER include/spdk/nbd.h 00:03:20.237 TEST_HEADER include/spdk/net.h 00:03:20.237 TEST_HEADER include/spdk/notify.h 00:03:20.237 TEST_HEADER include/spdk/nvme.h 00:03:20.237 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.237 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.237 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.237 CC test/dma/test_dma/test_dma.o 00:03:20.237 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.237 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.237 CC test/app/bdev_svc/bdev_svc.o 00:03:20.237 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.237 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.237 TEST_HEADER include/spdk/nvmf.h 00:03:20.237 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.237 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.237 TEST_HEADER include/spdk/opal.h 00:03:20.237 TEST_HEADER include/spdk/opal_spec.h 00:03:20.237 TEST_HEADER include/spdk/pci_ids.h 00:03:20.237 TEST_HEADER include/spdk/pipe.h 00:03:20.237 TEST_HEADER include/spdk/queue.h 00:03:20.237 TEST_HEADER include/spdk/reduce.h 00:03:20.237 TEST_HEADER include/spdk/rpc.h 00:03:20.237 TEST_HEADER include/spdk/scheduler.h 00:03:20.237 TEST_HEADER include/spdk/scsi.h 00:03:20.237 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.237 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.237 TEST_HEADER include/spdk/sock.h 00:03:20.237 TEST_HEADER include/spdk/stdinc.h 00:03:20.237 TEST_HEADER include/spdk/string.h 00:03:20.237 TEST_HEADER include/spdk/thread.h 00:03:20.237 TEST_HEADER include/spdk/trace.h 00:03:20.237 
TEST_HEADER include/spdk/trace_parser.h 00:03:20.237 TEST_HEADER include/spdk/tree.h 00:03:20.237 TEST_HEADER include/spdk/ublk.h 00:03:20.237 TEST_HEADER include/spdk/util.h 00:03:20.237 TEST_HEADER include/spdk/uuid.h 00:03:20.237 TEST_HEADER include/spdk/version.h 00:03:20.237 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.237 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.237 TEST_HEADER include/spdk/vhost.h 00:03:20.237 TEST_HEADER include/spdk/vmd.h 00:03:20.237 TEST_HEADER include/spdk/xor.h 00:03:20.237 TEST_HEADER include/spdk/zipf.h 00:03:20.237 CXX test/cpp_headers/accel.o 00:03:20.237 LINK rpc_client_test 00:03:20.237 LINK poller_perf 00:03:20.237 LINK nvmf_tgt 00:03:20.237 LINK zipf 00:03:20.237 LINK spdk_trace_record 00:03:20.237 LINK bdev_svc 00:03:20.237 CXX test/cpp_headers/accel_module.o 00:03:20.494 CXX test/cpp_headers/assert.o 00:03:20.494 LINK spdk_trace 00:03:20.494 CXX test/cpp_headers/barrier.o 00:03:20.494 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.494 CC examples/ioat/perf/perf.o 00:03:20.494 CXX test/cpp_headers/base64.o 00:03:20.494 CC examples/ioat/verify/verify.o 00:03:20.494 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.494 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:20.752 CC examples/idxd/perf/perf.o 00:03:20.752 LINK test_dma 00:03:20.752 LINK mem_callbacks 00:03:20.752 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.752 CXX test/cpp_headers/bdev.o 00:03:20.752 LINK lsvmd 00:03:20.752 LINK ioat_perf 00:03:20.752 LINK verify 00:03:20.752 LINK interrupt_tgt 00:03:20.752 CXX test/cpp_headers/bdev_module.o 00:03:20.752 CC test/env/vtophys/vtophys.o 00:03:20.752 LINK iscsi_tgt 00:03:20.752 CXX test/cpp_headers/bdev_zone.o 00:03:20.752 LINK nvme_fuzz 00:03:21.010 CC examples/vmd/led/led.o 00:03:21.010 CC test/app/histogram_perf/histogram_perf.o 00:03:21.010 LINK idxd_perf 00:03:21.010 LINK vtophys 00:03:21.010 CC test/app/jsoncat/jsoncat.o 00:03:21.010 CXX test/cpp_headers/bit_array.o 00:03:21.010 CC test/event/event_perf/event_perf.o 00:03:21.010 CC test/app/stub/stub.o 00:03:21.010 LINK led 00:03:21.010 LINK histogram_perf 00:03:21.010 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.010 LINK jsoncat 00:03:21.269 CXX test/cpp_headers/bit_pool.o 00:03:21.269 CC app/spdk_lspci/spdk_lspci.o 00:03:21.269 CC app/spdk_tgt/spdk_tgt.o 00:03:21.269 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:21.269 LINK event_perf 00:03:21.269 LINK stub 00:03:21.269 CC test/env/memory/memory_ut.o 00:03:21.269 LINK spdk_lspci 00:03:21.269 CXX test/cpp_headers/blob_bdev.o 00:03:21.269 LINK env_dpdk_post_init 00:03:21.269 LINK spdk_tgt 00:03:21.528 CC test/event/reactor/reactor.o 00:03:21.528 CC examples/sock/hello_world/hello_sock.o 00:03:21.528 CC examples/thread/thread/thread_ex.o 00:03:21.528 CC app/spdk_nvme_perf/perf.o 00:03:21.528 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.528 LINK reactor 00:03:21.528 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.528 CC test/event/reactor_perf/reactor_perf.o 00:03:21.528 CXX test/cpp_headers/blobfs.o 00:03:21.528 CC test/event/app_repeat/app_repeat.o 00:03:21.786 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.786 LINK thread 00:03:21.786 LINK hello_sock 00:03:21.786 LINK reactor_perf 00:03:21.786 CXX test/cpp_headers/blob.o 00:03:21.786 LINK app_repeat 00:03:21.786 CC app/spdk_nvme_identify/identify.o 00:03:21.786 CXX test/cpp_headers/conf.o 00:03:21.786 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.043 CC test/event/scheduler/scheduler.o 00:03:22.043 CC examples/nvme/hello_world/hello_world.o 00:03:22.043 CC 
examples/nvme/reconnect/reconnect.o 00:03:22.043 CXX test/cpp_headers/config.o 00:03:22.043 CXX test/cpp_headers/cpuset.o 00:03:22.043 LINK vhost_fuzz 00:03:22.043 LINK scheduler 00:03:22.043 LINK spdk_nvme_discover 00:03:22.301 CXX test/cpp_headers/crc16.o 00:03:22.301 LINK hello_world 00:03:22.301 CXX test/cpp_headers/crc32.o 00:03:22.301 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:22.301 LINK spdk_nvme_perf 00:03:22.301 LINK reconnect 00:03:22.301 CC test/nvme/aer/aer.o 00:03:22.301 LINK memory_ut 00:03:22.301 CXX test/cpp_headers/crc64.o 00:03:22.301 CXX test/cpp_headers/dif.o 00:03:22.559 CC test/accel/dif/dif.o 00:03:22.559 CC test/blobfs/mkfs/mkfs.o 00:03:22.559 CXX test/cpp_headers/dma.o 00:03:22.559 LINK spdk_nvme_identify 00:03:22.559 CC test/env/pci/pci_ut.o 00:03:22.559 CC app/spdk_top/spdk_top.o 00:03:22.559 LINK aer 00:03:22.559 CC test/lvol/esnap/esnap.o 00:03:22.559 CXX test/cpp_headers/endian.o 00:03:22.816 LINK mkfs 00:03:22.816 LINK nvme_manage 00:03:22.816 CC test/nvme/reset/reset.o 00:03:22.816 CC app/vhost/vhost.o 00:03:22.816 CXX test/cpp_headers/env_dpdk.o 00:03:22.816 LINK iscsi_fuzz 00:03:22.816 CC examples/nvme/arbitration/arbitration.o 00:03:23.073 CC app/spdk_dd/spdk_dd.o 00:03:23.073 CXX test/cpp_headers/env.o 00:03:23.073 LINK pci_ut 00:03:23.073 LINK vhost 00:03:23.073 LINK reset 00:03:23.073 CXX test/cpp_headers/event.o 00:03:23.073 CC app/fio/nvme/fio_plugin.o 00:03:23.073 LINK arbitration 00:03:23.073 CXX test/cpp_headers/fd_group.o 00:03:23.329 LINK dif 00:03:23.329 CC test/nvme/sgl/sgl.o 00:03:23.329 CC app/fio/bdev/fio_plugin.o 00:03:23.329 CXX test/cpp_headers/fd.o 00:03:23.329 CXX test/cpp_headers/file.o 00:03:23.329 CC examples/nvme/hotplug/hotplug.o 00:03:23.329 LINK spdk_dd 00:03:23.329 CXX test/cpp_headers/fsdev.o 00:03:23.329 CXX test/cpp_headers/fsdev_module.o 00:03:23.586 LINK sgl 00:03:23.586 CC test/nvme/e2edp/nvme_dp.o 00:03:23.586 LINK spdk_top 00:03:23.586 LINK hotplug 00:03:23.586 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.586 CXX test/cpp_headers/ftl.o 00:03:23.586 CC test/bdev/bdevio/bdevio.o 00:03:23.586 LINK spdk_nvme 00:03:23.586 LINK cmb_copy 00:03:23.586 CC test/nvme/overhead/overhead.o 00:03:23.844 CC test/nvme/err_injection/err_injection.o 00:03:23.844 LINK spdk_bdev 00:03:23.844 CC examples/nvme/abort/abort.o 00:03:23.844 CXX test/cpp_headers/fuse_dispatcher.o 00:03:23.844 LINK nvme_dp 00:03:23.844 LINK err_injection 00:03:23.844 CXX test/cpp_headers/gpt_spec.o 00:03:23.844 CC test/nvme/startup/startup.o 00:03:23.844 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.844 CC examples/accel/perf/accel_perf.o 00:03:24.101 LINK overhead 00:03:24.101 CC test/nvme/reserve/reserve.o 00:03:24.101 LINK bdevio 00:03:24.101 CXX test/cpp_headers/hexlify.o 00:03:24.101 LINK startup 00:03:24.101 LINK abort 00:03:24.101 CXX test/cpp_headers/histogram_data.o 00:03:24.101 CXX test/cpp_headers/idxd.o 00:03:24.101 CC test/nvme/simple_copy/simple_copy.o 00:03:24.101 CC examples/blob/hello_world/hello_blob.o 00:03:24.101 LINK reserve 00:03:24.101 LINK hello_fsdev 00:03:24.359 CXX test/cpp_headers/idxd_spec.o 00:03:24.359 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.359 CXX test/cpp_headers/init.o 00:03:24.359 LINK simple_copy 00:03:24.359 CC examples/blob/cli/blobcli.o 00:03:24.359 CXX test/cpp_headers/ioat.o 00:03:24.359 CXX test/cpp_headers/ioat_spec.o 00:03:24.359 CC test/nvme/connect_stress/connect_stress.o 00:03:24.359 LINK hello_blob 00:03:24.359 LINK pmr_persistence 00:03:24.359 LINK accel_perf 00:03:24.359 CXX 
test/cpp_headers/iscsi_spec.o 00:03:24.653 CC test/nvme/boot_partition/boot_partition.o 00:03:24.653 CXX test/cpp_headers/json.o 00:03:24.653 CXX test/cpp_headers/jsonrpc.o 00:03:24.653 LINK connect_stress 00:03:24.653 CXX test/cpp_headers/keyring.o 00:03:24.653 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.653 CC test/nvme/compliance/nvme_compliance.o 00:03:24.653 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.653 LINK boot_partition 00:03:24.653 CC test/nvme/fdp/fdp.o 00:03:24.911 CXX test/cpp_headers/keyring_module.o 00:03:24.911 CC test/nvme/cuse/cuse.o 00:03:24.911 LINK fused_ordering 00:03:24.911 LINK doorbell_aers 00:03:24.911 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.911 LINK blobcli 00:03:24.911 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.911 CXX test/cpp_headers/likely.o 00:03:24.911 CXX test/cpp_headers/log.o 00:03:24.911 CXX test/cpp_headers/lvol.o 00:03:24.911 LINK nvme_compliance 00:03:24.911 CXX test/cpp_headers/md5.o 00:03:24.911 LINK fdp 00:03:25.169 CXX test/cpp_headers/memory.o 00:03:25.169 CXX test/cpp_headers/mmio.o 00:03:25.169 LINK hello_bdev 00:03:25.169 CXX test/cpp_headers/nbd.o 00:03:25.169 CXX test/cpp_headers/net.o 00:03:25.169 CXX test/cpp_headers/notify.o 00:03:25.169 CXX test/cpp_headers/nvme.o 00:03:25.169 CXX test/cpp_headers/nvme_intel.o 00:03:25.169 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.169 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.169 CXX test/cpp_headers/nvme_spec.o 00:03:25.169 CXX test/cpp_headers/nvme_zns.o 00:03:25.169 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.169 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.427 CXX test/cpp_headers/nvmf.o 00:03:25.427 CXX test/cpp_headers/nvmf_spec.o 00:03:25.427 CXX test/cpp_headers/nvmf_transport.o 00:03:25.427 CXX test/cpp_headers/opal.o 00:03:25.427 CXX test/cpp_headers/opal_spec.o 00:03:25.427 CXX test/cpp_headers/pci_ids.o 00:03:25.427 CXX test/cpp_headers/pipe.o 00:03:25.427 CXX test/cpp_headers/queue.o 00:03:25.427 CXX test/cpp_headers/reduce.o 00:03:25.427 CXX test/cpp_headers/rpc.o 00:03:25.427 CXX test/cpp_headers/scheduler.o 00:03:25.427 CXX test/cpp_headers/scsi.o 00:03:25.427 CXX test/cpp_headers/sock.o 00:03:25.427 CXX test/cpp_headers/scsi_spec.o 00:03:25.685 CXX test/cpp_headers/stdinc.o 00:03:25.685 CXX test/cpp_headers/string.o 00:03:25.685 CXX test/cpp_headers/thread.o 00:03:25.685 CXX test/cpp_headers/trace.o 00:03:25.685 CXX test/cpp_headers/trace_parser.o 00:03:25.685 CXX test/cpp_headers/tree.o 00:03:25.685 CXX test/cpp_headers/ublk.o 00:03:25.685 CXX test/cpp_headers/util.o 00:03:25.685 CXX test/cpp_headers/uuid.o 00:03:25.685 CXX test/cpp_headers/version.o 00:03:25.685 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.685 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.685 CXX test/cpp_headers/vhost.o 00:03:25.685 LINK bdevperf 00:03:25.685 CXX test/cpp_headers/vmd.o 00:03:25.943 CXX test/cpp_headers/xor.o 00:03:25.943 CXX test/cpp_headers/zipf.o 00:03:25.943 LINK cuse 00:03:26.201 CC examples/nvmf/nvmf/nvmf.o 00:03:26.460 LINK nvmf 00:03:27.833 LINK esnap 00:03:27.833 00:03:27.833 real 1m6.186s 00:03:27.833 user 6m17.039s 00:03:27.833 sys 1m5.083s 00:03:27.833 09:09:19 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:27.833 09:09:19 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.833 ************************************ 00:03:27.833 END TEST make 00:03:27.833 ************************************ 00:03:28.092 09:09:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.092 09:09:19 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:03:28.092 09:09:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.092 09:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.092 09:09:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.092 09:09:19 -- pm/common@44 -- $ pid=5058 00:03:28.092 09:09:19 -- pm/common@50 -- $ kill -TERM 5058 00:03:28.092 09:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.092 09:09:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.092 09:09:19 -- pm/common@44 -- $ pid=5060 00:03:28.092 09:09:19 -- pm/common@50 -- $ kill -TERM 5060 00:03:28.092 09:09:19 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:28.092 09:09:19 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:28.092 09:09:19 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:28.092 09:09:19 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:28.092 09:09:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.092 09:09:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.092 09:09:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.092 09:09:19 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.092 09:09:19 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.092 09:09:19 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.092 09:09:19 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.092 09:09:19 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.092 09:09:19 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.092 09:09:19 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.092 09:09:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.092 09:09:19 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.092 09:09:19 -- scripts/common.sh@345 -- # : 1 00:03:28.092 09:09:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.092 09:09:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.092 09:09:19 -- scripts/common.sh@365 -- # decimal 1 00:03:28.092 09:09:19 -- scripts/common.sh@353 -- # local d=1 00:03:28.092 09:09:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.092 09:09:19 -- scripts/common.sh@355 -- # echo 1 00:03:28.092 09:09:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.092 09:09:19 -- scripts/common.sh@366 -- # decimal 2 00:03:28.092 09:09:19 -- scripts/common.sh@353 -- # local d=2 00:03:28.092 09:09:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.092 09:09:19 -- scripts/common.sh@355 -- # echo 2 00:03:28.092 09:09:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.092 09:09:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.092 09:09:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.092 09:09:19 -- scripts/common.sh@368 -- # return 0 00:03:28.092 09:09:19 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.092 09:09:19 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.092 --rc genhtml_branch_coverage=1 00:03:28.092 --rc genhtml_function_coverage=1 00:03:28.092 --rc genhtml_legend=1 00:03:28.092 --rc geninfo_all_blocks=1 00:03:28.092 --rc geninfo_unexecuted_blocks=1 00:03:28.092 00:03:28.092 ' 00:03:28.092 09:09:19 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.092 --rc genhtml_branch_coverage=1 00:03:28.092 --rc genhtml_function_coverage=1 00:03:28.092 --rc genhtml_legend=1 00:03:28.092 --rc geninfo_all_blocks=1 00:03:28.092 --rc geninfo_unexecuted_blocks=1 00:03:28.092 00:03:28.092 ' 00:03:28.092 09:09:19 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.092 --rc genhtml_branch_coverage=1 00:03:28.092 --rc genhtml_function_coverage=1 00:03:28.092 --rc genhtml_legend=1 00:03:28.092 --rc geninfo_all_blocks=1 00:03:28.092 --rc geninfo_unexecuted_blocks=1 00:03:28.092 00:03:28.092 ' 00:03:28.092 09:09:19 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:28.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.092 --rc genhtml_branch_coverage=1 00:03:28.093 --rc genhtml_function_coverage=1 00:03:28.093 --rc genhtml_legend=1 00:03:28.093 --rc geninfo_all_blocks=1 00:03:28.093 --rc geninfo_unexecuted_blocks=1 00:03:28.093 00:03:28.093 ' 00:03:28.093 09:09:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.093 09:09:19 -- nvmf/common.sh@7 -- # uname -s 00:03:28.093 09:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.093 09:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.093 09:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.093 09:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.093 09:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.093 09:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.093 09:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.093 09:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.093 09:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.093 09:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.093 09:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:03:28.093 
09:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:03:28.093 09:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.093 09:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.093 09:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.093 09:09:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.093 09:09:19 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.093 09:09:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.093 09:09:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.093 09:09:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.093 09:09:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.093 09:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.093 09:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.093 09:09:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.093 09:09:19 -- paths/export.sh@5 -- # export PATH 00:03:28.093 09:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.093 09:09:19 -- nvmf/common.sh@51 -- # : 0 00:03:28.093 09:09:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.093 09:09:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.093 09:09:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.093 09:09:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.093 09:09:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.093 09:09:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.093 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.093 09:09:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.093 09:09:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.093 09:09:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.093 09:09:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.093 09:09:19 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.093 09:09:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.093 09:09:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.093 09:09:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.093 09:09:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.093 09:09:19 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.093 09:09:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.093 09:09:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.093 09:09:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.093 09:09:19 -- spdk/autotest.sh@48 -- # udevadm_pid=54600 00:03:28.093 09:09:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.093 09:09:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.093 09:09:19 -- pm/common@17 -- # local monitor 00:03:28.093 09:09:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.093 09:09:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.093 09:09:19 -- pm/common@25 -- # sleep 1 00:03:28.093 09:09:19 -- pm/common@21 -- # date +%s 00:03:28.093 09:09:19 -- pm/common@21 -- # date +%s 00:03:28.093 09:09:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728378559 00:03:28.093 09:09:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728378559 00:03:28.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728378559_collect-cpu-load.pm.log 00:03:28.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728378559_collect-vmstat.pm.log 00:03:29.478 09:09:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.478 09:09:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.478 09:09:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.478 09:09:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.478 09:09:20 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.478 09:09:20 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:29.478 09:09:20 -- common/autotest_common.sh@10 -- # set +x 00:03:29.478 09:09:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:29.478 09:09:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:29.478 09:09:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:29.478 09:09:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:29.478 09:09:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:29.478 09:09:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.478 09:09:20 -- common/autotest_common.sh@1455 -- # uname 00:03:29.478 09:09:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:29.478 09:09:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.478 09:09:20 -- common/autotest_common.sh@1475 -- # uname 00:03:29.478 09:09:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:29.478 09:09:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:29.478 09:09:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:29.478 lcov: LCOV version 1.15 00:03:29.478 09:09:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.449 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.317 09:09:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:59.317 09:09:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.317 09:09:48 -- common/autotest_common.sh@10 -- # set +x 00:03:59.318 09:09:48 -- spdk/autotest.sh@78 -- # rm -f 00:03:59.318 09:09:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.318 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:59.318 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:59.318 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:59.318 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:59.318 09:09:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:59.318 09:09:49 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:59.318 09:09:49 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:59.318 09:09:49 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:59.318 09:09:49 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.318 09:09:49 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:59.318 09:09:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:59.318 09:09:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.318 1+0 records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103168 s, 102 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:59.318 1+0 records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380626 s, 275 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:59.318 1+0 
records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415459 s, 252 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:59.318 1+0 records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403676 s, 260 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:59.318 1+0 records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414453 s, 253 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.318 09:09:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.318 09:09:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:59.318 09:09:49 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:59.318 09:09:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:59.318 No valid GPT data, bailing 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:59.318 09:09:49 -- scripts/common.sh@394 -- # pt= 00:03:59.318 09:09:49 -- scripts/common.sh@395 -- # return 1 00:03:59.318 09:09:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:59.318 1+0 records in 00:03:59.318 1+0 records out 00:03:59.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335963 s, 312 MB/s 00:03:59.318 09:09:49 -- spdk/autotest.sh@105 -- # sync 00:03:59.318 09:09:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.318 09:09:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.318 09:09:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.261 09:09:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.261 09:09:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.261 09:09:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.261 09:09:51 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.783 
Hugepages
00:04:00.783 node hugesize free / total
00:04:01.044 node0 1048576kB 0 / 0
00:04:01.044 node0 2048kB 0 / 0
00:04:01.044
00:04:01.044 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:01.044 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:01.044 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:01.044 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:01.044 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:01.306 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:01.306 09:09:52 -- spdk/autotest.sh@117 -- # uname -s
00:04:01.306 09:09:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:01.306 09:09:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:01.306 09:09:52 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:01.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.132 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.132 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.132 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:02.132 09:09:53 -- common/autotest_common.sh@1515 -- # sleep 1
00:04:03.067 09:09:54 -- common/autotest_common.sh@1516 -- # bdfs=()
00:04:03.067 09:09:54 -- common/autotest_common.sh@1516 -- # local bdfs
00:04:03.067 09:09:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:04:03.067 09:09:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:04:03.067 09:09:54 -- common/autotest_common.sh@1496 -- # bdfs=()
00:04:03.067 09:09:54 -- common/autotest_common.sh@1496 -- # local bdfs
00:04:03.067 09:09:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:03.324 09:09:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:03.324 09:09:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:04:03.324 09:09:54 -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:04:03.324 09:09:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:03.324 09:09:54 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:03.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:03.582 Waiting for block devices as requested
00:04:03.840 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:03.840 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:03.840 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:03.840 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:09.099 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:09.099 09:10:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:04:09.099 09:10:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:09.099 09:10:00 -- common/autotest_common.sh@1486 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:09.099 09:10:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:09.099 09:10:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.099 09:10:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.099 09:10:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.099 09:10:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1541 -- # continue 00:04:09.099 09:10:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:09.099 09:10:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.099 09:10:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.099 09:10:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.099 09:10:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1541 -- # continue 00:04:09.099 09:10:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:09.099 09:10:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 
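The get_nvme_ctrlr_from_bdf calls traced here reduce to a sysfs lookup: resolve every /sys/class/nvme/nvmeX symlink and keep the one whose resolved path contains the requested BDF. A minimal standalone sketch of that logic (the real helper in test/common/autotest_common.sh adds more error handling):

  get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # each /sys/class/nvme/nvmeX resolves to a device path that embeds the PCI BDF
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    printf '/dev/%s\n' "$(basename "$path")"
  }
  get_nvme_ctrlr_from_bdf 0000:00:10.0   # prints /dev/nvme1 on this VM, per the trace above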
00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:09.099 09:10:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:09.099 09:10:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.100 09:10:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.100 09:10:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.100 09:10:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1541 -- # continue 00:04:09.100 09:10:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:09.100 09:10:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:09.100 09:10:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:09.100 09:10:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:09.100 09:10:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:09.100 09:10:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:09.100 09:10:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:09.100 09:10:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
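Each pass of the loop above asks the controller two questions via nvme-cli before attempting a revert: does OACS advertise namespace management (bit 3), and is there any unallocated capacity (unvmcap) left to reclaim? Schematically (a sketch of just the traced checks, not the full nvme_namespace_revert logic):

  for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
    (( (oacs & 0x8) == 0 )) && continue                            # bit 3: Namespace Management
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && continue                                 # no unallocated capacity, nothing to revert
    # ...delete and recreate namespaces here...
  done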
00:04:09.100 09:10:00 -- common/autotest_common.sh@1541 -- # continue 00:04:09.100 09:10:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:09.100 09:10:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.100 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.100 09:10:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:09.100 09:10:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.100 09:10:00 -- common/autotest_common.sh@10 -- # set +x 00:04:09.100 09:10:00 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.927 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.927 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.927 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.184 09:10:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:10.184 09:10:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.184 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.184 09:10:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:10.184 09:10:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:10.184 09:10:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:10.184 09:10:01 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:10.184 09:10:01 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:10.184 09:10:01 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:10.184 09:10:01 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:10.184 09:10:01 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:10.184 09:10:01 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:10.184 09:10:01 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:10.184 09:10:01 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.184 09:10:01 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:10.184 09:10:01 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.184 09:10:01 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:10.184 09:10:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:10.184 09:10:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.184 09:10:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.184 09:10:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.184 09:10:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.184 09:10:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.184 09:10:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
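opal_revert_cleanup walks the same BDF list but keeps only controllers whose PCI device ID reads 0x0a54 (an Intel data-center NVMe part); the QEMU controllers in this VM all report 0x0010, so none match and the cleanup ends up a no-op. A condensed sketch of the matching being traced (BDF list hard-coded here from the get_nvme_bdfs output above):

  get_nvme_bdfs_by_id() {
    local id=$1 bdf matched=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && matched+=("$bdf")
    done
    (( ${#matched[@]} )) && printf '%s\n' "${matched[@]}"
  }
  get_nvme_bdfs_by_id 0x0a54   # prints nothing on this VM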
00:04:10.184 09:10:01 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:10.184 09:10:01 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:10.184 09:10:01 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:10.184 09:10:01 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:10.184 09:10:01 -- common/autotest_common.sh@1570 -- # return 0 00:04:10.184 09:10:01 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:10.184 09:10:01 -- common/autotest_common.sh@1578 -- # return 0 00:04:10.184 09:10:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.184 09:10:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.184 09:10:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.184 09:10:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.184 09:10:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.184 09:10:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.184 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.184 09:10:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:10.184 09:10:01 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.184 09:10:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.184 09:10:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.184 09:10:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.184 ************************************ 00:04:10.184 START TEST env 00:04:10.184 ************************************ 00:04:10.184 09:10:01 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:10.184 * Looking for test storage... 00:04:10.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:10.184 09:10:01 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:10.184 09:10:01 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:10.184 09:10:01 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:10.443 09:10:01 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:10.443 09:10:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.443 09:10:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.443 09:10:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.443 09:10:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.443 09:10:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.443 09:10:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.443 09:10:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.443 09:10:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.443 09:10:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.443 09:10:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.443 09:10:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.443 09:10:01 env -- scripts/common.sh@344 -- # case "$op" in 00:04:10.443 09:10:01 env -- scripts/common.sh@345 -- # : 1 00:04:10.443 09:10:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.443 09:10:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.443 09:10:01 env -- scripts/common.sh@365 -- # decimal 1 00:04:10.443 09:10:01 env -- scripts/common.sh@353 -- # local d=1 00:04:10.443 09:10:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.443 09:10:01 env -- scripts/common.sh@355 -- # echo 1 00:04:10.443 09:10:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.443 09:10:01 env -- scripts/common.sh@366 -- # decimal 2 00:04:10.443 09:10:01 env -- scripts/common.sh@353 -- # local d=2 00:04:10.443 09:10:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.443 09:10:01 env -- scripts/common.sh@355 -- # echo 2 00:04:10.443 09:10:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.443 09:10:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.443 09:10:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.443 09:10:01 env -- scripts/common.sh@368 -- # return 0 00:04:10.443 09:10:01 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.443 09:10:01 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:10.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.444 --rc genhtml_branch_coverage=1 00:04:10.444 --rc genhtml_function_coverage=1 00:04:10.444 --rc genhtml_legend=1 00:04:10.444 --rc geninfo_all_blocks=1 00:04:10.444 --rc geninfo_unexecuted_blocks=1 00:04:10.444 00:04:10.444 ' 00:04:10.444 09:10:01 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.444 --rc genhtml_branch_coverage=1 00:04:10.444 --rc genhtml_function_coverage=1 00:04:10.444 --rc genhtml_legend=1 00:04:10.444 --rc geninfo_all_blocks=1 00:04:10.444 --rc geninfo_unexecuted_blocks=1 00:04:10.444 00:04:10.444 ' 00:04:10.444 09:10:01 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.444 --rc genhtml_branch_coverage=1 00:04:10.444 --rc genhtml_function_coverage=1 00:04:10.444 --rc genhtml_legend=1 00:04:10.444 --rc geninfo_all_blocks=1 00:04:10.444 --rc geninfo_unexecuted_blocks=1 00:04:10.444 00:04:10.444 ' 00:04:10.444 09:10:01 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:10.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.444 --rc genhtml_branch_coverage=1 00:04:10.444 --rc genhtml_function_coverage=1 00:04:10.444 --rc genhtml_legend=1 00:04:10.444 --rc geninfo_all_blocks=1 00:04:10.444 --rc geninfo_unexecuted_blocks=1 00:04:10.444 00:04:10.444 ' 00:04:10.444 09:10:01 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.444 09:10:01 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.444 09:10:01 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.444 09:10:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.444 ************************************ 00:04:10.444 START TEST env_memory 00:04:10.444 ************************************ 00:04:10.444 09:10:01 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:10.444 00:04:10.444 00:04:10.444 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.444 http://cunit.sourceforge.net/ 00:04:10.444 00:04:10.444 00:04:10.444 Suite: memory 00:04:10.444 Test: alloc and free memory map ...[2024-10-08 09:10:01.939138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.444 passed 00:04:10.444 Test: mem map translation ...[2024-10-08 09:10:01.969187] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.444 [2024-10-08 09:10:01.969339] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.444 [2024-10-08 09:10:01.969440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.444 [2024-10-08 09:10:01.969504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.444 passed 00:04:10.444 Test: mem map registration ...[2024-10-08 09:10:02.021873] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:10.444 [2024-10-08 09:10:02.022021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:10.444 passed 00:04:10.444 Test: mem map adjacent registrations ...passed 00:04:10.444 00:04:10.444 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.444 suites 1 1 n/a 0 0 00:04:10.444 tests 4 4 4 0 0 00:04:10.444 asserts 152 152 152 0 n/a 00:04:10.444 00:04:10.444 Elapsed time = 0.179 seconds 00:04:10.444 00:04:10.444 real 0m0.210s 00:04:10.444 user 0m0.186s 00:04:10.444 sys 0m0.018s 00:04:10.444 09:10:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.444 09:10:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:10.444 ************************************ 00:04:10.444 END TEST env_memory 00:04:10.444 ************************************ 00:04:10.702 09:10:02 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.702 09:10:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.702 09:10:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.702 09:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.702 ************************************ 00:04:10.702 START TEST env_vtophys 00:04:10.702 ************************************ 00:04:10.702 09:10:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:10.702 EAL: lib.eal log level changed from notice to debug 00:04:10.702 EAL: Detected lcore 0 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 1 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 2 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 3 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 4 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 5 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 6 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 7 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 8 as core 0 on socket 0 00:04:10.702 EAL: Detected lcore 9 as core 0 on socket 0 00:04:10.702 EAL: Maximum logical cores by configuration: 128 00:04:10.702 EAL: Detected CPU lcores: 10 00:04:10.702 EAL: Detected NUMA nodes: 1 00:04:10.702 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:10.702 EAL: Detected shared linkage of DPDK 00:04:10.702 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:10.702 EAL: Selected IOVA mode 'PA' 00:04:10.702 EAL: Probing VFIO support... 00:04:10.702 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:10.702 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:10.702 EAL: Ask a virtual area of 0x2e000 bytes 00:04:10.702 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:10.702 EAL: Setting up physically contiguous memory... 00:04:10.702 EAL: Setting maximum number of open files to 524288 00:04:10.702 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:10.702 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:10.702 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.702 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:10.702 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.702 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.702 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:10.702 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:10.702 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.702 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:10.702 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.702 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.702 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:10.702 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:10.702 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.702 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:10.702 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.702 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.702 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:10.702 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:10.702 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.702 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:10.702 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.702 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.702 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:10.702 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:10.702 EAL: Hugepages will be freed exactly as allocated. 00:04:10.702 EAL: No shared files mode enabled, IPC is disabled 00:04:10.702 EAL: No shared files mode enabled, IPC is disabled 00:04:10.702 EAL: TSC frequency is ~2600000 KHz 00:04:10.702 EAL: Main lcore 0 is ready (tid=7f14332c9a40;cpuset=[0]) 00:04:10.702 EAL: Trying to obtain current memory policy. 00:04:10.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.702 EAL: Restoring previous memory policy: 0 00:04:10.702 EAL: request: mp_malloc_sync 00:04:10.702 EAL: No shared files mode enabled, IPC is disabled 00:04:10.702 EAL: Heap on socket 0 was expanded by 2MB 00:04:10.702 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:10.702 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:10.702 EAL: Mem event callback 'spdk:(nil)' registered 00:04:10.702 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:10.702 00:04:10.702 00:04:10.702 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.702 http://cunit.sourceforge.net/ 00:04:10.702 00:04:10.702 00:04:10.702 Suite: components_suite 00:04:10.960 Test: vtophys_malloc_test ...passed 00:04:10.960 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:10.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.960 EAL: Restoring previous memory policy: 4 00:04:10.960 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.960 EAL: request: mp_malloc_sync 00:04:10.960 EAL: No shared files mode enabled, IPC is disabled 00:04:10.960 EAL: Heap on socket 0 was expanded by 4MB 00:04:10.960 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.960 EAL: request: mp_malloc_sync 00:04:10.960 EAL: No shared files mode enabled, IPC is disabled 00:04:10.960 EAL: Heap on socket 0 was shrunk by 4MB 00:04:10.961 EAL: Trying to obtain current memory policy. 00:04:10.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.961 EAL: Restoring previous memory policy: 4 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was expanded by 6MB 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was shrunk by 6MB 00:04:10.961 EAL: Trying to obtain current memory policy. 00:04:10.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.961 EAL: Restoring previous memory policy: 4 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was expanded by 10MB 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was shrunk by 10MB 00:04:10.961 EAL: Trying to obtain current memory policy. 00:04:10.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.961 EAL: Restoring previous memory policy: 4 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was expanded by 18MB 00:04:10.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.961 EAL: request: mp_malloc_sync 00:04:10.961 EAL: No shared files mode enabled, IPC is disabled 00:04:10.961 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.222 EAL: Trying to obtain current memory policy. 00:04:11.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.222 EAL: Restoring previous memory policy: 4 00:04:11.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.222 EAL: request: mp_malloc_sync 00:04:11.222 EAL: No shared files mode enabled, IPC is disabled 00:04:11.222 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.222 EAL: request: mp_malloc_sync 00:04:11.222 EAL: No shared files mode enabled, IPC is disabled 00:04:11.222 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.222 EAL: Trying to obtain current memory policy. 
00:04:11.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.222 EAL: Restoring previous memory policy: 4 00:04:11.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.222 EAL: request: mp_malloc_sync 00:04:11.222 EAL: No shared files mode enabled, IPC is disabled 00:04:11.222 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.222 EAL: request: mp_malloc_sync 00:04:11.222 EAL: No shared files mode enabled, IPC is disabled 00:04:11.222 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.222 EAL: Trying to obtain current memory policy. 00:04:11.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.222 EAL: Restoring previous memory policy: 4 00:04:11.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.222 EAL: request: mp_malloc_sync 00:04:11.222 EAL: No shared files mode enabled, IPC is disabled 00:04:11.222 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.506 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.506 EAL: request: mp_malloc_sync 00:04:11.506 EAL: No shared files mode enabled, IPC is disabled 00:04:11.506 EAL: Heap on socket 0 was shrunk by 130MB 00:04:11.506 EAL: Trying to obtain current memory policy. 00:04:11.506 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.506 EAL: Restoring previous memory policy: 4 00:04:11.506 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.506 EAL: request: mp_malloc_sync 00:04:11.506 EAL: No shared files mode enabled, IPC is disabled 00:04:11.506 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.768 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.768 EAL: request: mp_malloc_sync 00:04:11.768 EAL: No shared files mode enabled, IPC is disabled 00:04:11.768 EAL: Heap on socket 0 was shrunk by 258MB 00:04:12.028 EAL: Trying to obtain current memory policy. 00:04:12.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.028 EAL: Restoring previous memory policy: 4 00:04:12.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.028 EAL: request: mp_malloc_sync 00:04:12.028 EAL: No shared files mode enabled, IPC is disabled 00:04:12.028 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.600 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.600 EAL: request: mp_malloc_sync 00:04:12.600 EAL: No shared files mode enabled, IPC is disabled 00:04:12.600 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.169 EAL: Trying to obtain current memory policy. 
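Two numeric patterns in the EAL output are worth checking before the run continues below with the final 1026MB step. Earlier in the trace, each of the four memseg lists reserves 0x400000000 bytes of virtual address space (16 GiB apiece, 64 GiB total) at 2 MB page size (0x800kB); the later "Heap on socket 0 was expanded by ..." messages show hugepages being mapped into that reservation on demand. The expand/shrink sizes themselves follow 2^n + 2 MB. Both can be reproduced with shell arithmetic:

    # 0x400000000 bytes per memseg list; four lists of reserved VA in total.
    echo $(( 4 * 0x400000000 / 1024**3 ))GiB    # -> 64GiB
    # The expand/shrink sizes seen in the EAL messages: 2^n + 2 MB.
    for n in $(seq 1 10); do printf '%d ' $(( (1 << n) + 2 )); done; echo MB
    # -> 4 6 10 18 34 66 130 258 514 1026 MB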
00:04:13.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.169 EAL: Restoring previous memory policy: 4 00:04:13.169 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.169 EAL: request: mp_malloc_sync 00:04:13.169 EAL: No shared files mode enabled, IPC is disabled 00:04:13.169 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.112 EAL: request: mp_malloc_sync 00:04:14.112 EAL: No shared files mode enabled, IPC is disabled 00:04:14.112 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:15.053 passed 00:04:15.053 00:04:15.053 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.053 suites 1 1 n/a 0 0 00:04:15.053 tests 2 2 2 0 0 00:04:15.053 asserts 5614 5614 5614 0 n/a 00:04:15.053 00:04:15.053 Elapsed time = 4.105 seconds 00:04:15.053 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.053 EAL: request: mp_malloc_sync 00:04:15.053 EAL: No shared files mode enabled, IPC is disabled 00:04:15.053 EAL: Heap on socket 0 was shrunk by 2MB 00:04:15.053 EAL: No shared files mode enabled, IPC is disabled 00:04:15.053 EAL: No shared files mode enabled, IPC is disabled 00:04:15.053 EAL: No shared files mode enabled, IPC is disabled 00:04:15.053 00:04:15.053 real 0m4.350s 00:04:15.053 user 0m3.608s 00:04:15.053 sys 0m0.599s 00:04:15.053 09:10:06 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.053 09:10:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 END TEST env_vtophys 00:04:15.053 ************************************ 00:04:15.053 09:10:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.053 09:10:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.053 09:10:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.053 09:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 START TEST env_pci 00:04:15.053 ************************************ 00:04:15.053 09:10:06 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.053 00:04:15.053 00:04:15.053 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.053 http://cunit.sourceforge.net/ 00:04:15.053 00:04:15.053 00:04:15.053 Suite: pci 00:04:15.053 Test: pci_hook ...[2024-10-08 09:10:06.567081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57342 has claimed it 00:04:15.053 EAL: Cannot find device (10000:00:01.0) 00:04:15.053 passed 00:04:15.053 00:04:15.053 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.053 suites 1 1 n/a 0 0 00:04:15.053 tests 1 1 1 0 0 00:04:15.053 asserts 25 25 25 0 n/a 00:04:15.053 00:04:15.053 Elapsed time = 0.004 seconds 00:04:15.053 EAL: Failed to attach device on primary process 00:04:15.053 00:04:15.053 real 0m0.059s 00:04:15.053 user 0m0.023s 00:04:15.053 sys 0m0.035s 00:04:15.053 09:10:06 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.053 09:10:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 END TEST env_pci 00:04:15.053 ************************************ 00:04:15.053 09:10:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.053 09:10:06 env -- env/env.sh@15 -- # uname 00:04:15.053 09:10:06 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.053 09:10:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.053 09:10:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.053 09:10:06 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:15.053 09:10:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.053 09:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.053 ************************************ 00:04:15.053 START TEST env_dpdk_post_init 00:04:15.053 ************************************ 00:04:15.053 09:10:06 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.053 EAL: Detected CPU lcores: 10 00:04:15.053 EAL: Detected NUMA nodes: 1 00:04:15.053 EAL: Detected shared linkage of DPDK 00:04:15.053 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.053 EAL: Selected IOVA mode 'PA' 00:04:15.312 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.312 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:15.312 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:15.312 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:15.312 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:15.312 Starting DPDK initialization... 00:04:15.312 Starting SPDK post initialization... 00:04:15.312 SPDK NVMe probe 00:04:15.312 Attaching to 0000:00:10.0 00:04:15.312 Attaching to 0000:00:11.0 00:04:15.312 Attaching to 0000:00:12.0 00:04:15.312 Attaching to 0000:00:13.0 00:04:15.312 Attached to 0000:00:10.0 00:04:15.312 Attached to 0000:00:11.0 00:04:15.312 Attached to 0000:00:13.0 00:04:15.312 Attached to 0000:00:12.0 00:04:15.312 Cleaning up... 
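Two details in the probe output above: the test runs with -c 0x1 (a single-core mask) plus --base-virtaddr=0x200000000000, the fixed mapping base that env.sh appends on Linux, and the "Attached to" lines complete in a different order (...13.0 before ...12.0) than the "Attaching to" lines because the NVMe probes finish asynchronously. A minimal sketch of the invocation, with the flags copied from the trace:

    # Single core (0x1) and a fixed DPDK base address, as env.sh set up above.
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000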
00:04:15.312 00:04:15.312 real 0m0.214s 00:04:15.312 user 0m0.053s 00:04:15.312 sys 0m0.062s 00:04:15.312 ************************************ 00:04:15.312 END TEST env_dpdk_post_init 00:04:15.312 ************************************ 00:04:15.312 09:10:06 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.312 09:10:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 09:10:06 env -- env/env.sh@26 -- # uname 00:04:15.312 09:10:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:15.312 09:10:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.312 09:10:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.312 09:10:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.312 09:10:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.312 ************************************ 00:04:15.312 START TEST env_mem_callbacks 00:04:15.312 ************************************ 00:04:15.312 09:10:06 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.312 EAL: Detected CPU lcores: 10 00:04:15.312 EAL: Detected NUMA nodes: 1 00:04:15.312 EAL: Detected shared linkage of DPDK 00:04:15.312 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.312 EAL: Selected IOVA mode 'PA' 00:04:15.571 00:04:15.571 00:04:15.571 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.571 http://cunit.sourceforge.net/ 00:04:15.571 00:04:15.571 00:04:15.571 Suite: memory 00:04:15.571 Test: test ... 00:04:15.571 register 0x200000200000 2097152 00:04:15.571 malloc 3145728 00:04:15.571 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.571 register 0x200000400000 4194304 00:04:15.571 buf 0x2000004fffc0 len 3145728 PASSED 00:04:15.571 malloc 64 00:04:15.571 buf 0x2000004ffec0 len 64 PASSED 00:04:15.571 malloc 4194304 00:04:15.571 register 0x200000800000 6291456 00:04:15.571 buf 0x2000009fffc0 len 4194304 PASSED 00:04:15.571 free 0x2000004fffc0 3145728 00:04:15.571 free 0x2000004ffec0 64 00:04:15.571 unregister 0x200000400000 4194304 PASSED 00:04:15.571 free 0x2000009fffc0 4194304 00:04:15.571 unregister 0x200000800000 6291456 PASSED 00:04:15.571 malloc 8388608 00:04:15.571 register 0x200000400000 10485760 00:04:15.571 buf 0x2000005fffc0 len 8388608 PASSED 00:04:15.571 free 0x2000005fffc0 8388608 00:04:15.571 unregister 0x200000400000 10485760 PASSED 00:04:15.571 passed 00:04:15.571 00:04:15.571 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.571 suites 1 1 n/a 0 0 00:04:15.571 tests 1 1 1 0 0 00:04:15.571 asserts 15 15 15 0 n/a 00:04:15.571 00:04:15.571 Elapsed time = 0.041 seconds 00:04:15.571 00:04:15.571 real 0m0.209s 00:04:15.571 user 0m0.060s 00:04:15.571 sys 0m0.044s 00:04:15.571 09:10:07 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.571 09:10:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 ************************************ 00:04:15.571 END TEST env_mem_callbacks 00:04:15.571 ************************************ 00:04:15.571 00:04:15.571 real 0m5.400s 00:04:15.571 user 0m4.081s 00:04:15.571 sys 0m0.963s 00:04:15.571 09:10:07 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.572 09:10:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.572 ************************************ 00:04:15.572 END TEST env 00:04:15.572 
************************************ 00:04:15.572 09:10:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.572 09:10:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.572 09:10:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.572 09:10:07 -- common/autotest_common.sh@10 -- # set +x 00:04:15.572 ************************************ 00:04:15.572 START TEST rpc 00:04:15.572 ************************************ 00:04:15.572 09:10:07 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.572 * Looking for test storage... 00:04:15.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.572 09:10:07 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:15.572 09:10:07 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:15.572 09:10:07 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:15.830 09:10:07 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:15.830 09:10:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.830 09:10:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.830 09:10:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.830 09:10:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.830 09:10:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.830 09:10:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.830 09:10:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.830 09:10:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.830 09:10:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.830 09:10:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.830 09:10:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.830 09:10:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.830 09:10:07 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.830 09:10:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.831 09:10:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.831 09:10:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.831 09:10:07 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.831 09:10:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.831 09:10:07 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.831 09:10:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.831 09:10:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.831 09:10:07 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.831 09:10:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.831 09:10:07 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.831 09:10:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.831 09:10:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.831 09:10:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.831 09:10:07 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.831 --rc genhtml_branch_coverage=1 00:04:15.831 --rc genhtml_function_coverage=1 00:04:15.831 --rc genhtml_legend=1 00:04:15.831 --rc geninfo_all_blocks=1 00:04:15.831 --rc geninfo_unexecuted_blocks=1 00:04:15.831 00:04:15.831 ' 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.831 --rc genhtml_branch_coverage=1 00:04:15.831 --rc genhtml_function_coverage=1 00:04:15.831 --rc genhtml_legend=1 00:04:15.831 --rc geninfo_all_blocks=1 00:04:15.831 --rc geninfo_unexecuted_blocks=1 00:04:15.831 00:04:15.831 ' 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.831 --rc genhtml_branch_coverage=1 00:04:15.831 --rc genhtml_function_coverage=1 00:04:15.831 --rc genhtml_legend=1 00:04:15.831 --rc geninfo_all_blocks=1 00:04:15.831 --rc geninfo_unexecuted_blocks=1 00:04:15.831 00:04:15.831 ' 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.831 --rc genhtml_branch_coverage=1 00:04:15.831 --rc genhtml_function_coverage=1 00:04:15.831 --rc genhtml_legend=1 00:04:15.831 --rc geninfo_all_blocks=1 00:04:15.831 --rc geninfo_unexecuted_blocks=1 00:04:15.831 00:04:15.831 ' 00:04:15.831 09:10:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57469 00:04:15.831 09:10:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.831 09:10:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57469 00:04:15.831 09:10:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@831 -- # '[' -z 57469 ']' 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
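Before any RPC test case runs, rpc.sh@64 starts spdk_tgt with -e bdev (enabling the bdev tracepoint group seen later in trace_get_info) and blocks on waitforlisten until the UNIX socket /var/tmp/spdk.sock is serving. A hedged sketch of that startup pattern; the polling loop is a simplification of what waitforlisten actually does (it also retries RPC readiness):

    # Start the target with bdev tracepoints enabled, as rpc.sh does above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Simplified stand-in for waitforlisten: wait for the RPC socket to appear.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null   # target is ready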
00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.831 09:10:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 [2024-10-08 09:10:07.398494] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:15.831 [2024-10-08 09:10:07.398620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57469 ] 00:04:16.089 [2024-10-08 09:10:07.545384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.089 [2024-10-08 09:10:07.698412] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.089 [2024-10-08 09:10:07.698468] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57469' to capture a snapshot of events at runtime. 00:04:16.089 [2024-10-08 09:10:07.698491] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.089 [2024-10-08 09:10:07.698500] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.089 [2024-10-08 09:10:07.698506] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57469 for offline analysis/debug. 00:04:16.089 [2024-10-08 09:10:07.699193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.656 09:10:08 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.656 09:10:08 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:16.656 09:10:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.656 09:10:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.656 09:10:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.656 09:10:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.656 09:10:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.656 09:10:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.656 09:10:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.656 ************************************ 00:04:16.656 START TEST rpc_integrity 00:04:16.656 ************************************ 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.656 09:10:08 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.656 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.656 { 00:04:16.656 "name": "Malloc0", 00:04:16.656 "aliases": [ 00:04:16.656 "e7d7d2e4-d8dd-442e-a521-3c31d269d560" 00:04:16.656 ], 00:04:16.656 "product_name": "Malloc disk", 00:04:16.656 "block_size": 512, 00:04:16.656 "num_blocks": 16384, 00:04:16.656 "uuid": "e7d7d2e4-d8dd-442e-a521-3c31d269d560", 00:04:16.656 "assigned_rate_limits": { 00:04:16.656 "rw_ios_per_sec": 0, 00:04:16.656 "rw_mbytes_per_sec": 0, 00:04:16.656 "r_mbytes_per_sec": 0, 00:04:16.656 "w_mbytes_per_sec": 0 00:04:16.656 }, 00:04:16.656 "claimed": false, 00:04:16.656 "zoned": false, 00:04:16.656 "supported_io_types": { 00:04:16.656 "read": true, 00:04:16.656 "write": true, 00:04:16.656 "unmap": true, 00:04:16.656 "flush": true, 00:04:16.656 "reset": true, 00:04:16.656 "nvme_admin": false, 00:04:16.656 "nvme_io": false, 00:04:16.656 "nvme_io_md": false, 00:04:16.656 "write_zeroes": true, 00:04:16.656 "zcopy": true, 00:04:16.656 "get_zone_info": false, 00:04:16.656 "zone_management": false, 00:04:16.656 "zone_append": false, 00:04:16.656 "compare": false, 00:04:16.656 "compare_and_write": false, 00:04:16.656 "abort": true, 00:04:16.656 "seek_hole": false, 00:04:16.656 "seek_data": false, 00:04:16.656 "copy": true, 00:04:16.656 "nvme_iov_md": false 00:04:16.656 }, 00:04:16.656 "memory_domains": [ 00:04:16.656 { 00:04:16.656 "dma_device_id": "system", 00:04:16.656 "dma_device_type": 1 00:04:16.656 }, 00:04:16.656 { 00:04:16.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.656 "dma_device_type": 2 00:04:16.656 } 00:04:16.656 ], 00:04:16.656 "driver_specific": {} 00:04:16.656 } 00:04:16.656 ]' 00:04:16.656 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.914 [2024-10-08 09:10:08.346222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.914 [2024-10-08 09:10:08.346299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.914 [2024-10-08 09:10:08.346320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:16.914 [2024-10-08 09:10:08.346329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.914 [2024-10-08 09:10:08.348221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.914 [2024-10-08 09:10:08.348264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.914 Passthru0 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.914 
09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.914 { 00:04:16.914 "name": "Malloc0", 00:04:16.914 "aliases": [ 00:04:16.914 "e7d7d2e4-d8dd-442e-a521-3c31d269d560" 00:04:16.914 ], 00:04:16.914 "product_name": "Malloc disk", 00:04:16.914 "block_size": 512, 00:04:16.914 "num_blocks": 16384, 00:04:16.914 "uuid": "e7d7d2e4-d8dd-442e-a521-3c31d269d560", 00:04:16.914 "assigned_rate_limits": { 00:04:16.914 "rw_ios_per_sec": 0, 00:04:16.914 "rw_mbytes_per_sec": 0, 00:04:16.914 "r_mbytes_per_sec": 0, 00:04:16.914 "w_mbytes_per_sec": 0 00:04:16.914 }, 00:04:16.914 "claimed": true, 00:04:16.914 "claim_type": "exclusive_write", 00:04:16.914 "zoned": false, 00:04:16.914 "supported_io_types": { 00:04:16.914 "read": true, 00:04:16.914 "write": true, 00:04:16.914 "unmap": true, 00:04:16.914 "flush": true, 00:04:16.914 "reset": true, 00:04:16.914 "nvme_admin": false, 00:04:16.914 "nvme_io": false, 00:04:16.914 "nvme_io_md": false, 00:04:16.914 "write_zeroes": true, 00:04:16.914 "zcopy": true, 00:04:16.914 "get_zone_info": false, 00:04:16.914 "zone_management": false, 00:04:16.914 "zone_append": false, 00:04:16.914 "compare": false, 00:04:16.914 "compare_and_write": false, 00:04:16.914 "abort": true, 00:04:16.914 "seek_hole": false, 00:04:16.914 "seek_data": false, 00:04:16.914 "copy": true, 00:04:16.914 "nvme_iov_md": false 00:04:16.914 }, 00:04:16.914 "memory_domains": [ 00:04:16.914 { 00:04:16.914 "dma_device_id": "system", 00:04:16.914 "dma_device_type": 1 00:04:16.914 }, 00:04:16.914 { 00:04:16.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.914 "dma_device_type": 2 00:04:16.914 } 00:04:16.914 ], 00:04:16.914 "driver_specific": {} 00:04:16.914 }, 00:04:16.914 { 00:04:16.914 "name": "Passthru0", 00:04:16.914 "aliases": [ 00:04:16.914 "7f3b0be6-64c3-5169-a6c8-f521449d2503" 00:04:16.914 ], 00:04:16.914 "product_name": "passthru", 00:04:16.914 "block_size": 512, 00:04:16.914 "num_blocks": 16384, 00:04:16.914 "uuid": "7f3b0be6-64c3-5169-a6c8-f521449d2503", 00:04:16.914 "assigned_rate_limits": { 00:04:16.914 "rw_ios_per_sec": 0, 00:04:16.914 "rw_mbytes_per_sec": 0, 00:04:16.914 "r_mbytes_per_sec": 0, 00:04:16.914 "w_mbytes_per_sec": 0 00:04:16.914 }, 00:04:16.914 "claimed": false, 00:04:16.914 "zoned": false, 00:04:16.914 "supported_io_types": { 00:04:16.914 "read": true, 00:04:16.914 "write": true, 00:04:16.914 "unmap": true, 00:04:16.914 "flush": true, 00:04:16.914 "reset": true, 00:04:16.914 "nvme_admin": false, 00:04:16.914 "nvme_io": false, 00:04:16.914 "nvme_io_md": false, 00:04:16.914 "write_zeroes": true, 00:04:16.914 "zcopy": true, 00:04:16.914 "get_zone_info": false, 00:04:16.914 "zone_management": false, 00:04:16.914 "zone_append": false, 00:04:16.914 "compare": false, 00:04:16.914 "compare_and_write": false, 00:04:16.914 "abort": true, 00:04:16.914 "seek_hole": false, 00:04:16.914 "seek_data": false, 00:04:16.914 "copy": true, 00:04:16.914 "nvme_iov_md": false 00:04:16.914 }, 00:04:16.914 "memory_domains": [ 00:04:16.914 { 00:04:16.914 "dma_device_id": "system", 00:04:16.914 "dma_device_type": 1 00:04:16.914 }, 00:04:16.914 { 00:04:16.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.914 "dma_device_type": 2 
00:04:16.914 } 00:04:16.914 ], 00:04:16.914 "driver_specific": { 00:04:16.914 "passthru": { 00:04:16.914 "name": "Passthru0", 00:04:16.914 "base_bdev_name": "Malloc0" 00:04:16.914 } 00:04:16.914 } 00:04:16.914 } 00:04:16.914 ]' 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.914 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.914 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.915 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.915 ************************************ 00:04:16.915 END TEST rpc_integrity 00:04:16.915 ************************************ 00:04:16.915 09:10:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.915 00:04:16.915 real 0m0.226s 00:04:16.915 user 0m0.116s 00:04:16.915 sys 0m0.032s 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.915 09:10:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.915 09:10:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.915 09:10:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 ************************************ 00:04:16.915 START TEST rpc_plugins 00:04:16.915 ************************************ 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.915 { 00:04:16.915 "name": "Malloc1", 00:04:16.915 "aliases": 
[ 00:04:16.915 "6ee987d5-27b7-428b-901b-002a2ccb51d7" 00:04:16.915 ], 00:04:16.915 "product_name": "Malloc disk", 00:04:16.915 "block_size": 4096, 00:04:16.915 "num_blocks": 256, 00:04:16.915 "uuid": "6ee987d5-27b7-428b-901b-002a2ccb51d7", 00:04:16.915 "assigned_rate_limits": { 00:04:16.915 "rw_ios_per_sec": 0, 00:04:16.915 "rw_mbytes_per_sec": 0, 00:04:16.915 "r_mbytes_per_sec": 0, 00:04:16.915 "w_mbytes_per_sec": 0 00:04:16.915 }, 00:04:16.915 "claimed": false, 00:04:16.915 "zoned": false, 00:04:16.915 "supported_io_types": { 00:04:16.915 "read": true, 00:04:16.915 "write": true, 00:04:16.915 "unmap": true, 00:04:16.915 "flush": true, 00:04:16.915 "reset": true, 00:04:16.915 "nvme_admin": false, 00:04:16.915 "nvme_io": false, 00:04:16.915 "nvme_io_md": false, 00:04:16.915 "write_zeroes": true, 00:04:16.915 "zcopy": true, 00:04:16.915 "get_zone_info": false, 00:04:16.915 "zone_management": false, 00:04:16.915 "zone_append": false, 00:04:16.915 "compare": false, 00:04:16.915 "compare_and_write": false, 00:04:16.915 "abort": true, 00:04:16.915 "seek_hole": false, 00:04:16.915 "seek_data": false, 00:04:16.915 "copy": true, 00:04:16.915 "nvme_iov_md": false 00:04:16.915 }, 00:04:16.915 "memory_domains": [ 00:04:16.915 { 00:04:16.915 "dma_device_id": "system", 00:04:16.915 "dma_device_type": 1 00:04:16.915 }, 00:04:16.915 { 00:04:16.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.915 "dma_device_type": 2 00:04:16.915 } 00:04:16.915 ], 00:04:16.915 "driver_specific": {} 00:04:16.915 } 00:04:16.915 ]' 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.915 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.915 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:17.173 ************************************ 00:04:17.173 END TEST rpc_plugins 00:04:17.173 ************************************ 00:04:17.173 09:10:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.173 00:04:17.173 real 0m0.117s 00:04:17.173 user 0m0.067s 00:04:17.173 sys 0m0.015s 00:04:17.173 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.173 09:10:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.173 09:10:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.173 ************************************ 00:04:17.173 START TEST rpc_trace_cmd_test 00:04:17.173 ************************************ 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.173 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57469", 00:04:17.173 "tpoint_group_mask": "0x8", 00:04:17.173 "iscsi_conn": { 00:04:17.173 "mask": "0x2", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "scsi": { 00:04:17.173 "mask": "0x4", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "bdev": { 00:04:17.173 "mask": "0x8", 00:04:17.173 "tpoint_mask": "0xffffffffffffffff" 00:04:17.173 }, 00:04:17.173 "nvmf_rdma": { 00:04:17.173 "mask": "0x10", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "nvmf_tcp": { 00:04:17.173 "mask": "0x20", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "ftl": { 00:04:17.173 "mask": "0x40", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "blobfs": { 00:04:17.173 "mask": "0x80", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "dsa": { 00:04:17.173 "mask": "0x200", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "thread": { 00:04:17.173 "mask": "0x400", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "nvme_pcie": { 00:04:17.173 "mask": "0x800", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "iaa": { 00:04:17.173 "mask": "0x1000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "nvme_tcp": { 00:04:17.173 "mask": "0x2000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "bdev_nvme": { 00:04:17.173 "mask": "0x4000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "sock": { 00:04:17.173 "mask": "0x8000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "blob": { 00:04:17.173 "mask": "0x10000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "bdev_raid": { 00:04:17.173 "mask": "0x20000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 }, 00:04:17.173 "scheduler": { 00:04:17.173 "mask": "0x40000", 00:04:17.173 "tpoint_mask": "0x0" 00:04:17.173 } 00:04:17.173 }' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.173 ************************************ 00:04:17.173 END TEST rpc_trace_cmd_test 00:04:17.173 ************************************ 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.173 00:04:17.173 real 0m0.162s 
00:04:17.173 user 0m0.126s 00:04:17.173 sys 0m0.026s 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.173 09:10:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.173 09:10:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.173 09:10:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.173 09:10:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.173 09:10:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 ************************************ 00:04:17.432 START TEST rpc_daemon_integrity 00:04:17.432 ************************************ 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.432 { 00:04:17.432 "name": "Malloc2", 00:04:17.432 "aliases": [ 00:04:17.432 "9ca74e8d-45fa-43e0-8a87-0b8eb518ea08" 00:04:17.432 ], 00:04:17.432 "product_name": "Malloc disk", 00:04:17.432 "block_size": 512, 00:04:17.432 "num_blocks": 16384, 00:04:17.432 "uuid": "9ca74e8d-45fa-43e0-8a87-0b8eb518ea08", 00:04:17.432 "assigned_rate_limits": { 00:04:17.432 "rw_ios_per_sec": 0, 00:04:17.432 "rw_mbytes_per_sec": 0, 00:04:17.432 "r_mbytes_per_sec": 0, 00:04:17.432 "w_mbytes_per_sec": 0 00:04:17.432 }, 00:04:17.432 "claimed": false, 00:04:17.432 "zoned": false, 00:04:17.432 "supported_io_types": { 00:04:17.432 "read": true, 00:04:17.432 "write": true, 00:04:17.432 "unmap": true, 00:04:17.432 "flush": true, 00:04:17.432 "reset": true, 00:04:17.432 "nvme_admin": false, 00:04:17.432 "nvme_io": false, 00:04:17.432 "nvme_io_md": false, 00:04:17.432 "write_zeroes": true, 00:04:17.432 "zcopy": true, 00:04:17.432 "get_zone_info": false, 00:04:17.432 "zone_management": false, 00:04:17.432 "zone_append": false, 00:04:17.432 "compare": false, 00:04:17.432 
"compare_and_write": false, 00:04:17.432 "abort": true, 00:04:17.432 "seek_hole": false, 00:04:17.432 "seek_data": false, 00:04:17.432 "copy": true, 00:04:17.432 "nvme_iov_md": false 00:04:17.432 }, 00:04:17.432 "memory_domains": [ 00:04:17.432 { 00:04:17.432 "dma_device_id": "system", 00:04:17.432 "dma_device_type": 1 00:04:17.432 }, 00:04:17.432 { 00:04:17.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.432 "dma_device_type": 2 00:04:17.432 } 00:04:17.432 ], 00:04:17.432 "driver_specific": {} 00:04:17.432 } 00:04:17.432 ]' 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 [2024-10-08 09:10:08.963765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.432 [2024-10-08 09:10:08.963820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.432 [2024-10-08 09:10:08.963839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:17.432 [2024-10-08 09:10:08.963848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.432 [2024-10-08 09:10:08.965631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.432 [2024-10-08 09:10:08.965752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.432 Passthru0 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.432 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.432 { 00:04:17.432 "name": "Malloc2", 00:04:17.432 "aliases": [ 00:04:17.432 "9ca74e8d-45fa-43e0-8a87-0b8eb518ea08" 00:04:17.432 ], 00:04:17.432 "product_name": "Malloc disk", 00:04:17.432 "block_size": 512, 00:04:17.432 "num_blocks": 16384, 00:04:17.432 "uuid": "9ca74e8d-45fa-43e0-8a87-0b8eb518ea08", 00:04:17.432 "assigned_rate_limits": { 00:04:17.432 "rw_ios_per_sec": 0, 00:04:17.432 "rw_mbytes_per_sec": 0, 00:04:17.432 "r_mbytes_per_sec": 0, 00:04:17.432 "w_mbytes_per_sec": 0 00:04:17.432 }, 00:04:17.432 "claimed": true, 00:04:17.432 "claim_type": "exclusive_write", 00:04:17.432 "zoned": false, 00:04:17.432 "supported_io_types": { 00:04:17.432 "read": true, 00:04:17.432 "write": true, 00:04:17.432 "unmap": true, 00:04:17.432 "flush": true, 00:04:17.432 "reset": true, 00:04:17.432 "nvme_admin": false, 00:04:17.432 "nvme_io": false, 00:04:17.432 "nvme_io_md": false, 00:04:17.432 "write_zeroes": true, 00:04:17.432 "zcopy": true, 00:04:17.432 "get_zone_info": false, 00:04:17.432 "zone_management": false, 00:04:17.432 "zone_append": false, 00:04:17.432 "compare": false, 00:04:17.432 "compare_and_write": false, 00:04:17.432 "abort": true, 00:04:17.432 "seek_hole": false, 00:04:17.432 "seek_data": false, 
00:04:17.432 "copy": true, 00:04:17.432 "nvme_iov_md": false 00:04:17.432 }, 00:04:17.432 "memory_domains": [ 00:04:17.432 { 00:04:17.432 "dma_device_id": "system", 00:04:17.432 "dma_device_type": 1 00:04:17.432 }, 00:04:17.432 { 00:04:17.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.432 "dma_device_type": 2 00:04:17.432 } 00:04:17.432 ], 00:04:17.432 "driver_specific": {} 00:04:17.432 }, 00:04:17.432 { 00:04:17.432 "name": "Passthru0", 00:04:17.432 "aliases": [ 00:04:17.432 "3ceabaf4-4428-564f-9bfe-d559442fcf39" 00:04:17.432 ], 00:04:17.432 "product_name": "passthru", 00:04:17.432 "block_size": 512, 00:04:17.432 "num_blocks": 16384, 00:04:17.432 "uuid": "3ceabaf4-4428-564f-9bfe-d559442fcf39", 00:04:17.432 "assigned_rate_limits": { 00:04:17.432 "rw_ios_per_sec": 0, 00:04:17.432 "rw_mbytes_per_sec": 0, 00:04:17.432 "r_mbytes_per_sec": 0, 00:04:17.432 "w_mbytes_per_sec": 0 00:04:17.432 }, 00:04:17.432 "claimed": false, 00:04:17.432 "zoned": false, 00:04:17.432 "supported_io_types": { 00:04:17.432 "read": true, 00:04:17.432 "write": true, 00:04:17.432 "unmap": true, 00:04:17.432 "flush": true, 00:04:17.432 "reset": true, 00:04:17.432 "nvme_admin": false, 00:04:17.432 "nvme_io": false, 00:04:17.432 "nvme_io_md": false, 00:04:17.432 "write_zeroes": true, 00:04:17.432 "zcopy": true, 00:04:17.432 "get_zone_info": false, 00:04:17.432 "zone_management": false, 00:04:17.432 "zone_append": false, 00:04:17.432 "compare": false, 00:04:17.432 "compare_and_write": false, 00:04:17.432 "abort": true, 00:04:17.432 "seek_hole": false, 00:04:17.432 "seek_data": false, 00:04:17.433 "copy": true, 00:04:17.433 "nvme_iov_md": false 00:04:17.433 }, 00:04:17.433 "memory_domains": [ 00:04:17.433 { 00:04:17.433 "dma_device_id": "system", 00:04:17.433 "dma_device_type": 1 00:04:17.433 }, 00:04:17.433 { 00:04:17.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.433 "dma_device_type": 2 00:04:17.433 } 00:04:17.433 ], 00:04:17.433 "driver_specific": { 00:04:17.433 "passthru": { 00:04:17.433 "name": "Passthru0", 00:04:17.433 "base_bdev_name": "Malloc2" 00:04:17.433 } 00:04:17.433 } 00:04:17.433 } 00:04:17.433 ]' 00:04:17.433 09:10:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.433 ************************************ 00:04:17.433 END TEST rpc_daemon_integrity 00:04:17.433 ************************************ 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.433 00:04:17.433 real 0m0.230s 00:04:17.433 user 0m0.125s 00:04:17.433 sys 0m0.031s 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.433 09:10:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.691 09:10:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.691 09:10:09 rpc -- rpc/rpc.sh@84 -- # killprocess 57469 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 57469 ']' 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@954 -- # kill -0 57469 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@955 -- # uname 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57469 00:04:17.691 killing process with pid 57469 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57469' 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@969 -- # kill 57469 00:04:17.691 09:10:09 rpc -- common/autotest_common.sh@974 -- # wait 57469 00:04:19.066 ************************************ 00:04:19.066 END TEST rpc 00:04:19.066 ************************************ 00:04:19.066 00:04:19.066 real 0m3.258s 00:04:19.066 user 0m3.650s 00:04:19.066 sys 0m0.597s 00:04:19.066 09:10:10 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.066 09:10:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.066 09:10:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.066 09:10:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.066 09:10:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.066 09:10:10 -- common/autotest_common.sh@10 -- # set +x 00:04:19.066 ************************************ 00:04:19.066 START TEST skip_rpc 00:04:19.066 ************************************ 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.066 * Looking for test storage... 
00:04:19.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.066 09:10:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.066 --rc genhtml_branch_coverage=1 00:04:19.066 --rc genhtml_function_coverage=1 00:04:19.066 --rc genhtml_legend=1 00:04:19.066 --rc geninfo_all_blocks=1 00:04:19.066 --rc geninfo_unexecuted_blocks=1 00:04:19.066 00:04:19.066 ' 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.066 --rc genhtml_branch_coverage=1 00:04:19.066 --rc genhtml_function_coverage=1 00:04:19.066 --rc genhtml_legend=1 00:04:19.066 --rc geninfo_all_blocks=1 00:04:19.066 --rc geninfo_unexecuted_blocks=1 00:04:19.066 00:04:19.066 ' 00:04:19.066 09:10:10 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:04:19.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.066 --rc genhtml_branch_coverage=1 00:04:19.066 --rc genhtml_function_coverage=1 00:04:19.066 --rc genhtml_legend=1 00:04:19.066 --rc geninfo_all_blocks=1 00:04:19.066 --rc geninfo_unexecuted_blocks=1 00:04:19.066 00:04:19.066 ' 00:04:19.067 09:10:10 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:19.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.067 --rc genhtml_branch_coverage=1 00:04:19.067 --rc genhtml_function_coverage=1 00:04:19.067 --rc genhtml_legend=1 00:04:19.067 --rc geninfo_all_blocks=1 00:04:19.067 --rc geninfo_unexecuted_blocks=1 00:04:19.067 00:04:19.067 ' 00:04:19.067 09:10:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.067 09:10:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.067 09:10:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.067 09:10:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.067 09:10:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.067 09:10:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.067 ************************************ 00:04:19.067 START TEST skip_rpc 00:04:19.067 ************************************ 00:04:19.067 09:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:19.067 09:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57676 00:04:19.067 09:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.067 09:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.067 09:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.067 [2024-10-08 09:10:10.694947] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
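skip_rpc has just launched spdk_tgt with --no-rpc-server; after the 5-second sleep it asserts that an RPC call fails, using the NOT wrapper from autotest_common.sh to invert the exit status. Condensed, and assuming the stock scripts/rpc.py client in place of rpc_cmd, the check that follows amounts to:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  if scripts/rpc.py spdk_get_version; then
    echo 'FAIL: RPC server answered despite --no-rpc-server' >&2
    exit 1
  fi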
00:04:19.067 [2024-10-08 09:10:10.695280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 00:04:19.325 [2024-10-08 09:10:10.841515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.325 [2024-10-08 09:10:10.997790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57676 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57676 ']' 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57676 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57676 00:04:24.590 killing process with pid 57676 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57676' 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57676 00:04:24.590 09:10:15 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57676 00:04:25.542 00:04:25.542 real 0m6.321s 00:04:25.542 user 0m5.957s 00:04:25.542 sys 0m0.258s 00:04:25.542 09:10:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.542 09:10:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.542 ************************************ 00:04:25.542 END TEST skip_rpc 00:04:25.542 
************************************ 00:04:25.542 09:10:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.542 09:10:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.542 09:10:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.542 09:10:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.542 ************************************ 00:04:25.542 START TEST skip_rpc_with_json 00:04:25.542 ************************************ 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57775 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57775 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57775 ']' 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.542 09:10:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.542 [2024-10-08 09:10:17.060776] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
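skip_rpc_with_json, starting here, drives a config round-trip: against the freshly started target, nvmf_get_transports is expected to fail with -19 ("No such device") because no transport exists yet, then the TCP transport is created and the whole subsystem tree is snapshotted with save_config. The equivalent manual sequence, again assuming scripts/rpc.py, is roughly:

  scripts/rpc.py nvmf_get_transports --trtype tcp    # expected JSON-RPC error -19: no transport yet
  scripts/rpc.py nvmf_create_transport -t tcp        # target logs '*** TCP Transport Init ***'
  scripts/rpc.py save_config > test/rpc/config.json  # dump every subsystem's live settings as JSON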
00:04:25.542 [2024-10-08 09:10:17.060941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:04:25.542 [2024-10-08 09:10:17.220453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.802 [2024-10-08 09:10:17.404832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.367 09:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.367 09:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:26.367 09:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.367 09:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.367 09:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.367 request: 00:04:26.367 { 00:04:26.367 "trtype": "tcp", 00:04:26.367 "method": "nvmf_get_transports", 00:04:26.367 "req_id": 1 00:04:26.367 } 00:04:26.367 Got JSON-RPC error response 00:04:26.367 response: 00:04:26.367 { 00:04:26.367 "code": -19, 00:04:26.367 "message": "No such device" 00:04:26.367 } 00:04:26.367 [2024-10-08 09:10:17.996422] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.367 [2024-10-08 09:10:18.004510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.367 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.625 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.625 09:10:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.625 { 00:04:26.625 "subsystems": [ 00:04:26.625 { 00:04:26.625 "subsystem": "fsdev", 00:04:26.625 "config": [ 00:04:26.625 { 00:04:26.625 "method": "fsdev_set_opts", 00:04:26.625 "params": { 00:04:26.625 "fsdev_io_pool_size": 65535, 00:04:26.625 "fsdev_io_cache_size": 256 00:04:26.625 } 00:04:26.625 } 00:04:26.625 ] 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "subsystem": "keyring", 00:04:26.625 "config": [] 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "subsystem": "iobuf", 00:04:26.625 "config": [ 00:04:26.625 { 00:04:26.625 "method": "iobuf_set_options", 00:04:26.625 "params": { 00:04:26.625 "small_pool_count": 8192, 00:04:26.625 "large_pool_count": 1024, 00:04:26.625 "small_bufsize": 8192, 00:04:26.625 "large_bufsize": 135168 00:04:26.625 } 00:04:26.625 } 00:04:26.625 ] 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "subsystem": "sock", 00:04:26.625 "config": [ 00:04:26.625 { 00:04:26.625 "method": 
"sock_set_default_impl", 00:04:26.625 "params": { 00:04:26.625 "impl_name": "posix" 00:04:26.625 } 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "method": "sock_impl_set_options", 00:04:26.625 "params": { 00:04:26.625 "impl_name": "ssl", 00:04:26.625 "recv_buf_size": 4096, 00:04:26.625 "send_buf_size": 4096, 00:04:26.625 "enable_recv_pipe": true, 00:04:26.625 "enable_quickack": false, 00:04:26.625 "enable_placement_id": 0, 00:04:26.625 "enable_zerocopy_send_server": true, 00:04:26.625 "enable_zerocopy_send_client": false, 00:04:26.625 "zerocopy_threshold": 0, 00:04:26.625 "tls_version": 0, 00:04:26.625 "enable_ktls": false 00:04:26.625 } 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "method": "sock_impl_set_options", 00:04:26.625 "params": { 00:04:26.625 "impl_name": "posix", 00:04:26.625 "recv_buf_size": 2097152, 00:04:26.625 "send_buf_size": 2097152, 00:04:26.625 "enable_recv_pipe": true, 00:04:26.625 "enable_quickack": false, 00:04:26.625 "enable_placement_id": 0, 00:04:26.625 "enable_zerocopy_send_server": true, 00:04:26.625 "enable_zerocopy_send_client": false, 00:04:26.625 "zerocopy_threshold": 0, 00:04:26.625 "tls_version": 0, 00:04:26.625 "enable_ktls": false 00:04:26.625 } 00:04:26.625 } 00:04:26.625 ] 00:04:26.625 }, 00:04:26.625 { 00:04:26.625 "subsystem": "vmd", 00:04:26.625 "config": [] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "accel", 00:04:26.626 "config": [ 00:04:26.626 { 00:04:26.626 "method": "accel_set_options", 00:04:26.626 "params": { 00:04:26.626 "small_cache_size": 128, 00:04:26.626 "large_cache_size": 16, 00:04:26.626 "task_count": 2048, 00:04:26.626 "sequence_count": 2048, 00:04:26.626 "buf_count": 2048 00:04:26.626 } 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "bdev", 00:04:26.626 "config": [ 00:04:26.626 { 00:04:26.626 "method": "bdev_set_options", 00:04:26.626 "params": { 00:04:26.626 "bdev_io_pool_size": 65535, 00:04:26.626 "bdev_io_cache_size": 256, 00:04:26.626 "bdev_auto_examine": true, 00:04:26.626 "iobuf_small_cache_size": 128, 00:04:26.626 "iobuf_large_cache_size": 16 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "bdev_raid_set_options", 00:04:26.626 "params": { 00:04:26.626 "process_window_size_kb": 1024, 00:04:26.626 "process_max_bandwidth_mb_sec": 0 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "bdev_iscsi_set_options", 00:04:26.626 "params": { 00:04:26.626 "timeout_sec": 30 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "bdev_nvme_set_options", 00:04:26.626 "params": { 00:04:26.626 "action_on_timeout": "none", 00:04:26.626 "timeout_us": 0, 00:04:26.626 "timeout_admin_us": 0, 00:04:26.626 "keep_alive_timeout_ms": 10000, 00:04:26.626 "arbitration_burst": 0, 00:04:26.626 "low_priority_weight": 0, 00:04:26.626 "medium_priority_weight": 0, 00:04:26.626 "high_priority_weight": 0, 00:04:26.626 "nvme_adminq_poll_period_us": 10000, 00:04:26.626 "nvme_ioq_poll_period_us": 0, 00:04:26.626 "io_queue_requests": 0, 00:04:26.626 "delay_cmd_submit": true, 00:04:26.626 "transport_retry_count": 4, 00:04:26.626 "bdev_retry_count": 3, 00:04:26.626 "transport_ack_timeout": 0, 00:04:26.626 "ctrlr_loss_timeout_sec": 0, 00:04:26.626 "reconnect_delay_sec": 0, 00:04:26.626 "fast_io_fail_timeout_sec": 0, 00:04:26.626 "disable_auto_failback": false, 00:04:26.626 "generate_uuids": false, 00:04:26.626 "transport_tos": 0, 00:04:26.626 "nvme_error_stat": false, 00:04:26.626 "rdma_srq_size": 0, 00:04:26.626 "io_path_stat": false, 00:04:26.626 
"allow_accel_sequence": false, 00:04:26.626 "rdma_max_cq_size": 0, 00:04:26.626 "rdma_cm_event_timeout_ms": 0, 00:04:26.626 "dhchap_digests": [ 00:04:26.626 "sha256", 00:04:26.626 "sha384", 00:04:26.626 "sha512" 00:04:26.626 ], 00:04:26.626 "dhchap_dhgroups": [ 00:04:26.626 "null", 00:04:26.626 "ffdhe2048", 00:04:26.626 "ffdhe3072", 00:04:26.626 "ffdhe4096", 00:04:26.626 "ffdhe6144", 00:04:26.626 "ffdhe8192" 00:04:26.626 ] 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "bdev_nvme_set_hotplug", 00:04:26.626 "params": { 00:04:26.626 "period_us": 100000, 00:04:26.626 "enable": false 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "bdev_wait_for_examine" 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "scsi", 00:04:26.626 "config": null 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "scheduler", 00:04:26.626 "config": [ 00:04:26.626 { 00:04:26.626 "method": "framework_set_scheduler", 00:04:26.626 "params": { 00:04:26.626 "name": "static" 00:04:26.626 } 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "vhost_scsi", 00:04:26.626 "config": [] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "vhost_blk", 00:04:26.626 "config": [] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "ublk", 00:04:26.626 "config": [] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "nbd", 00:04:26.626 "config": [] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "nvmf", 00:04:26.626 "config": [ 00:04:26.626 { 00:04:26.626 "method": "nvmf_set_config", 00:04:26.626 "params": { 00:04:26.626 "discovery_filter": "match_any", 00:04:26.626 "admin_cmd_passthru": { 00:04:26.626 "identify_ctrlr": false 00:04:26.626 }, 00:04:26.626 "dhchap_digests": [ 00:04:26.626 "sha256", 00:04:26.626 "sha384", 00:04:26.626 "sha512" 00:04:26.626 ], 00:04:26.626 "dhchap_dhgroups": [ 00:04:26.626 "null", 00:04:26.626 "ffdhe2048", 00:04:26.626 "ffdhe3072", 00:04:26.626 "ffdhe4096", 00:04:26.626 "ffdhe6144", 00:04:26.626 "ffdhe8192" 00:04:26.626 ] 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "nvmf_set_max_subsystems", 00:04:26.626 "params": { 00:04:26.626 "max_subsystems": 1024 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "nvmf_set_crdt", 00:04:26.626 "params": { 00:04:26.626 "crdt1": 0, 00:04:26.626 "crdt2": 0, 00:04:26.626 "crdt3": 0 00:04:26.626 } 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "method": "nvmf_create_transport", 00:04:26.626 "params": { 00:04:26.626 "trtype": "TCP", 00:04:26.626 "max_queue_depth": 128, 00:04:26.626 "max_io_qpairs_per_ctrlr": 127, 00:04:26.626 "in_capsule_data_size": 4096, 00:04:26.626 "max_io_size": 131072, 00:04:26.626 "io_unit_size": 131072, 00:04:26.626 "max_aq_depth": 128, 00:04:26.626 "num_shared_buffers": 511, 00:04:26.626 "buf_cache_size": 4294967295, 00:04:26.626 "dif_insert_or_strip": false, 00:04:26.626 "zcopy": false, 00:04:26.626 "c2h_success": true, 00:04:26.626 "sock_priority": 0, 00:04:26.626 "abort_timeout_sec": 1, 00:04:26.626 "ack_timeout": 0, 00:04:26.626 "data_wr_pool_size": 0 00:04:26.626 } 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 }, 00:04:26.626 { 00:04:26.626 "subsystem": "iscsi", 00:04:26.626 "config": [ 00:04:26.626 { 00:04:26.626 "method": "iscsi_set_options", 00:04:26.626 "params": { 00:04:26.626 "node_base": "iqn.2016-06.io.spdk", 00:04:26.626 "max_sessions": 128, 00:04:26.626 "max_connections_per_session": 2, 00:04:26.626 "max_queue_depth": 64, 00:04:26.626 "default_time2wait": 2, 
00:04:26.626 "default_time2retain": 20, 00:04:26.626 "first_burst_length": 8192, 00:04:26.626 "immediate_data": true, 00:04:26.626 "allow_duplicated_isid": false, 00:04:26.626 "error_recovery_level": 0, 00:04:26.626 "nop_timeout": 60, 00:04:26.626 "nop_in_interval": 30, 00:04:26.626 "disable_chap": false, 00:04:26.626 "require_chap": false, 00:04:26.626 "mutual_chap": false, 00:04:26.626 "chap_group": 0, 00:04:26.626 "max_large_datain_per_connection": 64, 00:04:26.626 "max_r2t_per_connection": 4, 00:04:26.626 "pdu_pool_size": 36864, 00:04:26.626 "immediate_data_pool_size": 16384, 00:04:26.626 "data_out_pool_size": 2048 00:04:26.626 } 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 } 00:04:26.626 ] 00:04:26.626 } 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57775 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57775 ']' 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57775 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57775 00:04:26.626 killing process with pid 57775 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57775' 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57775 00:04:26.626 09:10:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57775 00:04:28.050 09:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57814 00:04:28.050 09:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:28.050 09:10:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57814 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57814 ']' 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57814 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57814 00:04:33.319 killing process with pid 57814 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57814' 00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57814 
00:04:33.319 09:10:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57814 00:04:34.702 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.702 09:10:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.702 ************************************ 00:04:34.702 END TEST skip_rpc_with_json 00:04:34.702 ************************************ 00:04:34.702 00:04:34.702 real 0m9.001s 00:04:34.702 user 0m8.641s 00:04:34.702 sys 0m0.631s 00:04:34.702 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.702 09:10:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.702 09:10:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.702 ************************************ 00:04:34.702 START TEST skip_rpc_with_delay 00:04:34.702 ************************************ 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.702 [2024-10-08 09:10:26.098174] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
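skip_rpc_with_delay needs no live server at all: it only checks that spdk_tgt rejects the contradictory flag pair up front, exiting with the app.c error just logged before initialization proceeds, and the NOT wrapper turns that expected failure into a pass (the same pattern skip_rpc used for its failing spdk_get_version call). Condensed:

  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'FAIL: --wait-for-rpc accepted even though no RPC server will start' >&2
    exit 1
  fi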
00:04:34.702 [2024-10-08 09:10:26.098305] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:34.702 ************************************ 00:04:34.702 END TEST skip_rpc_with_delay 00:04:34.702 ************************************ 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:34.702 00:04:34.702 real 0m0.124s 00:04:34.702 user 0m0.066s 00:04:34.702 sys 0m0.057s 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.702 09:10:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:34.702 09:10:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:34.702 09:10:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:34.702 09:10:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.702 09:10:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.702 ************************************ 00:04:34.702 START TEST exit_on_failed_rpc_init 00:04:34.702 ************************************ 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57937 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57937 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57937 ']' 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.702 09:10:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.702 [2024-10-08 09:10:26.267784] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
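waitforlisten, traced above for pid 57937, blocks until the new target's default RPC socket actually answers (that target's startup messages continue just below). A minimal sketch of the idea, assuming the stock scripts/rpc.py client; the real helper adds a retry budget and richer error handling beyond this:

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do                          # give up if the target dies
      scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1
  }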
00:04:34.702 [2024-10-08 09:10:26.267915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57937 ] 00:04:34.962 [2024-10-08 09:10:26.413446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.962 [2024-10-08 09:10:26.569278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.532 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.533 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.533 [2024-10-08 09:10:27.126785] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:35.533 [2024-10-08 09:10:27.126886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:04:35.808 [2024-10-08 09:10:27.269135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.808 [2024-10-08 09:10:27.400979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.808 [2024-10-08 09:10:27.401060] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
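The "Unable to start RPC service" and spdk_app_stop entries that follow complete this failure, and it is exactly the failure exit_on_failed_rpc_init provokes on purpose: the first spdk_tgt already owns the default /var/tmp/spdk.sock, so a second instance must exit non-zero, which the NOT wrapper converts into a pass. Condensed, with the usual remedy noted (-r assigns a different RPC socket path):

  build/bin/spdk_tgt -m 0x1 &                    # first instance claims /var/tmp/spdk.sock
  # ... waitforlisten "$!" ...
  if build/bin/spdk_tgt -m 0x2; then             # same default socket: rpc_listen must fail
    echo 'FAIL: second target started despite the socket collision' >&2
    exit 1
  fi
  # outside the test, a second target would simply take its own socket:
  #   build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock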
00:04:35.808 [2024-10-08 09:10:27.401071] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:35.808 [2024-10-08 09:10:27.401080] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57937 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57937 ']' 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57937 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57937 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.080 killing process with pid 57937 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57937' 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57937 00:04:36.080 09:10:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57937 00:04:37.486 00:04:37.486 real 0m2.787s 00:04:37.486 user 0m3.127s 00:04:37.486 sys 0m0.408s 00:04:37.486 09:10:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.486 09:10:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.486 ************************************ 00:04:37.486 END TEST exit_on_failed_rpc_init 00:04:37.486 ************************************ 00:04:37.486 09:10:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.486 00:04:37.486 real 0m18.538s 00:04:37.486 user 0m17.932s 00:04:37.486 sys 0m1.518s 00:04:37.486 09:10:29 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.486 09:10:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.486 ************************************ 00:04:37.486 END TEST skip_rpc 00:04:37.486 ************************************ 00:04:37.486 09:10:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.486 09:10:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.486 09:10:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.486 09:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.486 
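The "Looking for test storage" preamble that opens rpc_client below (and json_config after it) re-runs the same version gate already traced for skip_rpc: scripts/common.sh splits two version strings on '.', '-' and ':' and compares them field by field to decide whether the installed lcov predates 2. A sketch of that helper, condensed from the trace; numeric fields only, whereas the real cmp_versions sanitizes fields via decimal() and supports more operators:

  cmp_versions() {                  # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local v n1 n2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
      n1=${ver1[v]:-0} n2=${ver2[v]:-0}
      (( n1 > n2 )) && { [[ $2 == '>' ]]; return; }
      (( n1 < n2 )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '==' ]]                # every field matched
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo '1.15 is older than 2'   # true: 1 < 2 in the first field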
************************************ 00:04:37.486 START TEST rpc_client 00:04:37.486 ************************************ 00:04:37.486 09:10:29 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.486 * Looking for test storage... 00:04:37.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:37.486 09:10:29 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.486 09:10:29 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.486 09:10:29 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.744 09:10:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:37.744 09:10:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.745 09:10:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:37.745 OK 00:04:37.745 09:10:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.745 ************************************ 00:04:37.745 END TEST rpc_client 00:04:37.745 ************************************ 00:04:37.745 00:04:37.745 real 0m0.192s 00:04:37.745 user 0m0.113s 00:04:37.745 sys 0m0.087s 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.745 09:10:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:37.745 09:10:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.745 09:10:29 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.745 09:10:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.745 09:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.745 ************************************ 00:04:37.745 START TEST json_config 00:04:37.745 ************************************ 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.745 09:10:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.745 09:10:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.745 09:10:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.745 09:10:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.745 09:10:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.745 09:10:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:37.745 09:10:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.745 09:10:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.745 09:10:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.745 09:10:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.745 09:10:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.745 09:10:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.745 09:10:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.745 09:10:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.745 09:10:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:37.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.745 --rc genhtml_branch_coverage=1 00:04:37.745 --rc genhtml_function_coverage=1 00:04:37.745 --rc genhtml_legend=1 00:04:37.745 --rc geninfo_all_blocks=1 00:04:37.745 --rc geninfo_unexecuted_blocks=1 00:04:37.745 00:04:37.745 ' 00:04:37.745 09:10:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.745 09:10:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.745 09:10:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.004 09:10:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.004 09:10:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.004 09:10:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.004 09:10:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.004 09:10:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.004 09:10:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.004 09:10:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.004 09:10:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:38.004 09:10:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.004 09:10:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.004 09:10:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.004 WARNING: No tests are enabled so not running JSON configuration tests 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:38.004 09:10:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:38.004 00:04:38.004 real 0m0.147s 00:04:38.004 user 0m0.094s 00:04:38.004 sys 0m0.058s 00:04:38.004 09:10:29 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.004 09:10:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.004 ************************************ 00:04:38.004 END TEST json_config 00:04:38.004 ************************************ 00:04:38.004 09:10:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.004 09:10:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.004 09:10:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.004 09:10:29 -- common/autotest_common.sh@10 -- # set +x 00:04:38.004 ************************************ 00:04:38.004 START TEST json_config_extra_key 00:04:38.004 ************************************ 00:04:38.004 09:10:29 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.004 09:10:29 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.004 09:10:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.004 09:10:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.005 09:10:29 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.005 --rc genhtml_branch_coverage=1 00:04:38.005 --rc genhtml_function_coverage=1 00:04:38.005 --rc genhtml_legend=1 00:04:38.005 --rc geninfo_all_blocks=1 00:04:38.005 --rc geninfo_unexecuted_blocks=1 00:04:38.005 00:04:38.005 ' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.005 --rc genhtml_branch_coverage=1 00:04:38.005 --rc genhtml_function_coverage=1 00:04:38.005 --rc genhtml_legend=1 00:04:38.005 --rc geninfo_all_blocks=1 00:04:38.005 --rc geninfo_unexecuted_blocks=1 00:04:38.005 00:04:38.005 ' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.005 --rc genhtml_branch_coverage=1 00:04:38.005 --rc genhtml_function_coverage=1 00:04:38.005 --rc genhtml_legend=1 00:04:38.005 --rc geninfo_all_blocks=1 00:04:38.005 --rc geninfo_unexecuted_blocks=1 00:04:38.005 00:04:38.005 ' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.005 --rc genhtml_branch_coverage=1 00:04:38.005 --rc 
genhtml_function_coverage=1 00:04:38.005 --rc genhtml_legend=1 00:04:38.005 --rc geninfo_all_blocks=1 00:04:38.005 --rc geninfo_unexecuted_blocks=1 00:04:38.005 00:04:38.005 ' 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3ee1f361-b177-48ce-904d-e6a9a5ba0a2f 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.005 09:10:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.005 09:10:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 09:10:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 09:10:29 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 09:10:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:38.005 09:10:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.005 09:10:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.005 INFO: launching applications... 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
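Note on the two "[: : integer expression expected" complaints above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. the numeric -eq test receives an empty left operand because the variable it expands is unset in this run. The test evaluates as false and the run continues, but the stderr noise is avoidable. A minimal reproduction and a defensive rewrite, using an illustrative variable name rather than the one nvmf/common.sh actually tests:

    # reproduce: an unset variable fed to a numeric test
    unset SOME_TEST_FLAG
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo set    # stderr: [: : integer expression expected

    # defensive form: default the expansion so the operand is always numeric
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi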
00:04:38.005 09:10:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58148 00:04:38.005 Waiting for target to run... 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.005 09:10:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58148 /var/tmp/spdk_tgt.sock 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58148 ']' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.005 09:10:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.006 09:10:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.006 09:10:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.263 [2024-10-08 09:10:29.687114] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:38.263 [2024-10-08 09:10:29.687247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ] 00:04:38.520 [2024-10-08 09:10:30.006438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.520 [2024-10-08 09:10:30.178643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.086 09:10:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.086 09:10:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:39.086 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.086 INFO: shutting down applications... 00:04:39.086 09:10:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
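The waitforlisten 58148 call traced above is the startup gate for these tests: it blocks until the freshly launched spdk_tgt answers RPCs on /var/tmp/spdk_tgt.sock, giving up after max_retries=100 attempts. A condensed sketch of that loop, assuming SPDK's rpc.py is on the path (the real helper in autotest_common.sh carries more error handling); the shutdown traced just below is the mirror image, sending SIGINT and then polling kill -0 every 0.5 s for up to 30 tries:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died during startup
            # an answered RPC means the app is initialized and listening
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }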
00:04:39.086 09:10:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58148 ]] 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58148 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58148 00:04:39.086 09:10:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.650 09:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.650 09:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.650 09:10:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58148 00:04:39.650 09:10:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.217 09:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.217 09:10:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.217 09:10:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58148 00:04:40.217 09:10:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.783 09:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.783 09:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.783 09:10:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58148 00:04:40.783 09:10:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58148 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.041 SPDK target shutdown done 00:04:41.041 09:10:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.041 Success 00:04:41.041 09:10:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:41.041 00:04:41.041 real 0m3.216s 00:04:41.041 user 0m2.855s 00:04:41.041 sys 0m0.413s 00:04:41.041 09:10:32 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.041 09:10:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.041 ************************************ 00:04:41.041 END TEST json_config_extra_key 00:04:41.041 ************************************ 00:04:41.041 09:10:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.041 09:10:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.299 09:10:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.299 09:10:32 -- common/autotest_common.sh@10 -- # set +x 00:04:41.299 
************************************ 00:04:41.299 START TEST alias_rpc 00:04:41.299 ************************************ 00:04:41.299 09:10:32 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.299 * Looking for test storage... 00:04:41.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:41.299 09:10:32 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:41.299 09:10:32 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:41.299 09:10:32 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:41.299 09:10:32 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.299 09:10:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.300 09:10:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.300 --rc genhtml_branch_coverage=1 00:04:41.300 --rc genhtml_function_coverage=1 00:04:41.300 --rc genhtml_legend=1 00:04:41.300 --rc geninfo_all_blocks=1 00:04:41.300 --rc geninfo_unexecuted_blocks=1 00:04:41.300 00:04:41.300 ' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.300 --rc genhtml_branch_coverage=1 00:04:41.300 --rc genhtml_function_coverage=1 00:04:41.300 --rc genhtml_legend=1 00:04:41.300 --rc geninfo_all_blocks=1 00:04:41.300 --rc geninfo_unexecuted_blocks=1 00:04:41.300 00:04:41.300 ' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.300 --rc genhtml_branch_coverage=1 00:04:41.300 --rc genhtml_function_coverage=1 00:04:41.300 --rc genhtml_legend=1 00:04:41.300 --rc geninfo_all_blocks=1 00:04:41.300 --rc geninfo_unexecuted_blocks=1 00:04:41.300 00:04:41.300 ' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.300 --rc genhtml_branch_coverage=1 00:04:41.300 --rc genhtml_function_coverage=1 00:04:41.300 --rc genhtml_legend=1 00:04:41.300 --rc geninfo_all_blocks=1 00:04:41.300 --rc geninfo_unexecuted_blocks=1 00:04:41.300 00:04:41.300 ' 00:04:41.300 09:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:41.300 09:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58241 00:04:41.300 09:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.300 09:10:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58241 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58241 ']' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
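The lt 1.15 2 trace above, repeated near the top of every test in this log, is autotest_common.sh probing the installed lcov: 1.15 sorts below 2, so the run exports the legacy --rc lcov_branch_coverage / lcov_function_coverage option spellings through LCOV_OPTS. The comparison splits each version string on dots, dashes and colons and walks the components pairwise. A condensed sketch of the same logic, not scripts/common.sh verbatim:

    # return 0 (true) when version $1 sorts strictly below version $2
    lt() {
        local -a ver1 ver2
        local IFS=.-:                       # split points, as in scripts/common.sh
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # missing components count as 0, so 1.15 vs 2 compares 1 < 2 and stops
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc lcov_* option names"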
00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.300 09:10:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.300 [2024-10-08 09:10:32.950058] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:41.300 [2024-10-08 09:10:32.950641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ] 00:04:41.558 [2024-10-08 09:10:33.098025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.855 [2024-10-08 09:10:33.283381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.421 09:10:33 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.421 09:10:33 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:42.421 09:10:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:42.678 09:10:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58241 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58241 ']' 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58241 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58241 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.678 killing process with pid 58241 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58241' 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 58241 00:04:42.678 09:10:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 58241 00:04:44.052 00:04:44.052 real 0m2.952s 00:04:44.053 user 0m3.050s 00:04:44.053 sys 0m0.421s 00:04:44.053 09:10:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.053 ************************************ 00:04:44.053 END TEST alias_rpc 00:04:44.053 ************************************ 00:04:44.053 09:10:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.053 09:10:35 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:44.053 09:10:35 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.053 09:10:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.053 09:10:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.053 09:10:35 -- common/autotest_common.sh@10 -- # set +x 00:04:44.053 ************************************ 00:04:44.053 START TEST spdkcli_tcp 00:04:44.053 ************************************ 00:04:44.053 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.311 * Looking for test storage... 
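The killprocess 58241 sequence above is the standard teardown: confirm the pid is still alive with kill -0, read the command name back with ps to check it is an SPDK reactor (reactor_0) rather than a sudo wrapper, then send the default SIGTERM and wait to reap the exit status. A condensed sketch of that sequence; the real helper also covers FreeBSD and handles sudo-wrapped children instead of refusing as this one does:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 0        # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1     # simplified: never kill the wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"            # plain SIGTERM lets spdk_tgt shut down cleanly
        wait "$pid" || true    # reap it; ignore the nonzero status a signal produces
    }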
00:04:44.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:44.311 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.311 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.312 09:10:35 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.312 --rc genhtml_branch_coverage=1 00:04:44.312 --rc genhtml_function_coverage=1 00:04:44.312 --rc genhtml_legend=1 00:04:44.312 --rc geninfo_all_blocks=1 00:04:44.312 --rc geninfo_unexecuted_blocks=1 00:04:44.312 00:04:44.312 ' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.312 --rc genhtml_branch_coverage=1 00:04:44.312 --rc genhtml_function_coverage=1 00:04:44.312 --rc genhtml_legend=1 00:04:44.312 --rc geninfo_all_blocks=1 00:04:44.312 --rc geninfo_unexecuted_blocks=1 00:04:44.312 
00:04:44.312 ' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.312 --rc genhtml_branch_coverage=1 00:04:44.312 --rc genhtml_function_coverage=1 00:04:44.312 --rc genhtml_legend=1 00:04:44.312 --rc geninfo_all_blocks=1 00:04:44.312 --rc geninfo_unexecuted_blocks=1 00:04:44.312 00:04:44.312 ' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:44.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.312 --rc genhtml_branch_coverage=1 00:04:44.312 --rc genhtml_function_coverage=1 00:04:44.312 --rc genhtml_legend=1 00:04:44.312 --rc geninfo_all_blocks=1 00:04:44.312 --rc geninfo_unexecuted_blocks=1 00:04:44.312 00:04:44.312 ' 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58337 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58337 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58337 ']' 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.312 09:10:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.312 09:10:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.312 [2024-10-08 09:10:35.943557] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
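In the spdk_tgt invocation above, -m 0x3 is the reactor core mask: bit n selects logical core n, so 0x3 (binary 11) asks for cores 0 and 1, and -p 0 makes core 0 the main core. The pair of "Reactor started on core" notices that follows is exactly that mask decoded. A quick way to decode any mask by hand:

    mask=0x3
    for (( core = 0; core < 64; core++ )); do
        if (( (mask >> core) & 1 )); then
            echo "reactor on core $core"
        fi
    done
    # prints: reactor on core 0
    #         reactor on core 1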
00:04:44.312 [2024-10-08 09:10:35.943689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58337 ] 00:04:44.570 [2024-10-08 09:10:36.095799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.828 [2024-10-08 09:10:36.282043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.828 [2024-10-08 09:10:36.282381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.395 09:10:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.395 09:10:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:45.395 09:10:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.395 09:10:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58354 00:04:45.395 09:10:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.395 [ 00:04:45.395 "bdev_malloc_delete", 00:04:45.395 "bdev_malloc_create", 00:04:45.395 "bdev_null_resize", 00:04:45.395 "bdev_null_delete", 00:04:45.395 "bdev_null_create", 00:04:45.395 "bdev_nvme_cuse_unregister", 00:04:45.395 "bdev_nvme_cuse_register", 00:04:45.395 "bdev_opal_new_user", 00:04:45.395 "bdev_opal_set_lock_state", 00:04:45.395 "bdev_opal_delete", 00:04:45.395 "bdev_opal_get_info", 00:04:45.395 "bdev_opal_create", 00:04:45.395 "bdev_nvme_opal_revert", 00:04:45.395 "bdev_nvme_opal_init", 00:04:45.395 "bdev_nvme_send_cmd", 00:04:45.395 "bdev_nvme_set_keys", 00:04:45.395 "bdev_nvme_get_path_iostat", 00:04:45.395 "bdev_nvme_get_mdns_discovery_info", 00:04:45.395 "bdev_nvme_stop_mdns_discovery", 00:04:45.395 "bdev_nvme_start_mdns_discovery", 00:04:45.395 "bdev_nvme_set_multipath_policy", 00:04:45.395 "bdev_nvme_set_preferred_path", 00:04:45.395 "bdev_nvme_get_io_paths", 00:04:45.395 "bdev_nvme_remove_error_injection", 00:04:45.395 "bdev_nvme_add_error_injection", 00:04:45.395 "bdev_nvme_get_discovery_info", 00:04:45.395 "bdev_nvme_stop_discovery", 00:04:45.395 "bdev_nvme_start_discovery", 00:04:45.395 "bdev_nvme_get_controller_health_info", 00:04:45.395 "bdev_nvme_disable_controller", 00:04:45.395 "bdev_nvme_enable_controller", 00:04:45.395 "bdev_nvme_reset_controller", 00:04:45.395 "bdev_nvme_get_transport_statistics", 00:04:45.395 "bdev_nvme_apply_firmware", 00:04:45.395 "bdev_nvme_detach_controller", 00:04:45.395 "bdev_nvme_get_controllers", 00:04:45.395 "bdev_nvme_attach_controller", 00:04:45.395 "bdev_nvme_set_hotplug", 00:04:45.395 "bdev_nvme_set_options", 00:04:45.395 "bdev_passthru_delete", 00:04:45.395 "bdev_passthru_create", 00:04:45.395 "bdev_lvol_set_parent_bdev", 00:04:45.395 "bdev_lvol_set_parent", 00:04:45.395 "bdev_lvol_check_shallow_copy", 00:04:45.395 "bdev_lvol_start_shallow_copy", 00:04:45.395 "bdev_lvol_grow_lvstore", 00:04:45.395 "bdev_lvol_get_lvols", 00:04:45.395 "bdev_lvol_get_lvstores", 00:04:45.395 "bdev_lvol_delete", 00:04:45.395 "bdev_lvol_set_read_only", 00:04:45.395 "bdev_lvol_resize", 00:04:45.395 "bdev_lvol_decouple_parent", 00:04:45.395 "bdev_lvol_inflate", 00:04:45.395 "bdev_lvol_rename", 00:04:45.395 "bdev_lvol_clone_bdev", 00:04:45.395 "bdev_lvol_clone", 00:04:45.395 "bdev_lvol_snapshot", 00:04:45.395 "bdev_lvol_create", 00:04:45.395 "bdev_lvol_delete_lvstore", 00:04:45.395 "bdev_lvol_rename_lvstore", 00:04:45.395 
"bdev_lvol_create_lvstore", 00:04:45.395 "bdev_raid_set_options", 00:04:45.395 "bdev_raid_remove_base_bdev", 00:04:45.395 "bdev_raid_add_base_bdev", 00:04:45.395 "bdev_raid_delete", 00:04:45.395 "bdev_raid_create", 00:04:45.395 "bdev_raid_get_bdevs", 00:04:45.395 "bdev_error_inject_error", 00:04:45.395 "bdev_error_delete", 00:04:45.395 "bdev_error_create", 00:04:45.395 "bdev_split_delete", 00:04:45.395 "bdev_split_create", 00:04:45.395 "bdev_delay_delete", 00:04:45.395 "bdev_delay_create", 00:04:45.395 "bdev_delay_update_latency", 00:04:45.395 "bdev_zone_block_delete", 00:04:45.395 "bdev_zone_block_create", 00:04:45.395 "blobfs_create", 00:04:45.395 "blobfs_detect", 00:04:45.395 "blobfs_set_cache_size", 00:04:45.395 "bdev_xnvme_delete", 00:04:45.395 "bdev_xnvme_create", 00:04:45.395 "bdev_aio_delete", 00:04:45.395 "bdev_aio_rescan", 00:04:45.395 "bdev_aio_create", 00:04:45.395 "bdev_ftl_set_property", 00:04:45.396 "bdev_ftl_get_properties", 00:04:45.396 "bdev_ftl_get_stats", 00:04:45.396 "bdev_ftl_unmap", 00:04:45.396 "bdev_ftl_unload", 00:04:45.396 "bdev_ftl_delete", 00:04:45.396 "bdev_ftl_load", 00:04:45.396 "bdev_ftl_create", 00:04:45.396 "bdev_virtio_attach_controller", 00:04:45.396 "bdev_virtio_scsi_get_devices", 00:04:45.396 "bdev_virtio_detach_controller", 00:04:45.396 "bdev_virtio_blk_set_hotplug", 00:04:45.396 "bdev_iscsi_delete", 00:04:45.396 "bdev_iscsi_create", 00:04:45.396 "bdev_iscsi_set_options", 00:04:45.396 "accel_error_inject_error", 00:04:45.396 "ioat_scan_accel_module", 00:04:45.396 "dsa_scan_accel_module", 00:04:45.396 "iaa_scan_accel_module", 00:04:45.396 "keyring_file_remove_key", 00:04:45.396 "keyring_file_add_key", 00:04:45.396 "keyring_linux_set_options", 00:04:45.396 "fsdev_aio_delete", 00:04:45.396 "fsdev_aio_create", 00:04:45.396 "iscsi_get_histogram", 00:04:45.396 "iscsi_enable_histogram", 00:04:45.396 "iscsi_set_options", 00:04:45.396 "iscsi_get_auth_groups", 00:04:45.396 "iscsi_auth_group_remove_secret", 00:04:45.396 "iscsi_auth_group_add_secret", 00:04:45.396 "iscsi_delete_auth_group", 00:04:45.396 "iscsi_create_auth_group", 00:04:45.396 "iscsi_set_discovery_auth", 00:04:45.396 "iscsi_get_options", 00:04:45.396 "iscsi_target_node_request_logout", 00:04:45.396 "iscsi_target_node_set_redirect", 00:04:45.396 "iscsi_target_node_set_auth", 00:04:45.396 "iscsi_target_node_add_lun", 00:04:45.396 "iscsi_get_stats", 00:04:45.396 "iscsi_get_connections", 00:04:45.396 "iscsi_portal_group_set_auth", 00:04:45.396 "iscsi_start_portal_group", 00:04:45.396 "iscsi_delete_portal_group", 00:04:45.396 "iscsi_create_portal_group", 00:04:45.396 "iscsi_get_portal_groups", 00:04:45.396 "iscsi_delete_target_node", 00:04:45.396 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.396 "iscsi_target_node_add_pg_ig_maps", 00:04:45.396 "iscsi_create_target_node", 00:04:45.396 "iscsi_get_target_nodes", 00:04:45.396 "iscsi_delete_initiator_group", 00:04:45.396 "iscsi_initiator_group_remove_initiators", 00:04:45.396 "iscsi_initiator_group_add_initiators", 00:04:45.396 "iscsi_create_initiator_group", 00:04:45.396 "iscsi_get_initiator_groups", 00:04:45.396 "nvmf_set_crdt", 00:04:45.396 "nvmf_set_config", 00:04:45.396 "nvmf_set_max_subsystems", 00:04:45.396 "nvmf_stop_mdns_prr", 00:04:45.396 "nvmf_publish_mdns_prr", 00:04:45.396 "nvmf_subsystem_get_listeners", 00:04:45.396 "nvmf_subsystem_get_qpairs", 00:04:45.396 "nvmf_subsystem_get_controllers", 00:04:45.396 "nvmf_get_stats", 00:04:45.396 "nvmf_get_transports", 00:04:45.396 "nvmf_create_transport", 00:04:45.396 "nvmf_get_targets", 00:04:45.396 
"nvmf_delete_target", 00:04:45.396 "nvmf_create_target", 00:04:45.396 "nvmf_subsystem_allow_any_host", 00:04:45.396 "nvmf_subsystem_set_keys", 00:04:45.396 "nvmf_subsystem_remove_host", 00:04:45.396 "nvmf_subsystem_add_host", 00:04:45.396 "nvmf_ns_remove_host", 00:04:45.396 "nvmf_ns_add_host", 00:04:45.396 "nvmf_subsystem_remove_ns", 00:04:45.396 "nvmf_subsystem_set_ns_ana_group", 00:04:45.396 "nvmf_subsystem_add_ns", 00:04:45.396 "nvmf_subsystem_listener_set_ana_state", 00:04:45.396 "nvmf_discovery_get_referrals", 00:04:45.396 "nvmf_discovery_remove_referral", 00:04:45.396 "nvmf_discovery_add_referral", 00:04:45.396 "nvmf_subsystem_remove_listener", 00:04:45.396 "nvmf_subsystem_add_listener", 00:04:45.396 "nvmf_delete_subsystem", 00:04:45.396 "nvmf_create_subsystem", 00:04:45.396 "nvmf_get_subsystems", 00:04:45.396 "env_dpdk_get_mem_stats", 00:04:45.396 "nbd_get_disks", 00:04:45.396 "nbd_stop_disk", 00:04:45.396 "nbd_start_disk", 00:04:45.396 "ublk_recover_disk", 00:04:45.396 "ublk_get_disks", 00:04:45.396 "ublk_stop_disk", 00:04:45.396 "ublk_start_disk", 00:04:45.396 "ublk_destroy_target", 00:04:45.396 "ublk_create_target", 00:04:45.396 "virtio_blk_create_transport", 00:04:45.396 "virtio_blk_get_transports", 00:04:45.396 "vhost_controller_set_coalescing", 00:04:45.396 "vhost_get_controllers", 00:04:45.396 "vhost_delete_controller", 00:04:45.396 "vhost_create_blk_controller", 00:04:45.396 "vhost_scsi_controller_remove_target", 00:04:45.396 "vhost_scsi_controller_add_target", 00:04:45.396 "vhost_start_scsi_controller", 00:04:45.396 "vhost_create_scsi_controller", 00:04:45.396 "thread_set_cpumask", 00:04:45.396 "scheduler_set_options", 00:04:45.396 "framework_get_governor", 00:04:45.396 "framework_get_scheduler", 00:04:45.396 "framework_set_scheduler", 00:04:45.396 "framework_get_reactors", 00:04:45.396 "thread_get_io_channels", 00:04:45.396 "thread_get_pollers", 00:04:45.396 "thread_get_stats", 00:04:45.396 "framework_monitor_context_switch", 00:04:45.396 "spdk_kill_instance", 00:04:45.396 "log_enable_timestamps", 00:04:45.396 "log_get_flags", 00:04:45.396 "log_clear_flag", 00:04:45.396 "log_set_flag", 00:04:45.396 "log_get_level", 00:04:45.396 "log_set_level", 00:04:45.396 "log_get_print_level", 00:04:45.396 "log_set_print_level", 00:04:45.396 "framework_enable_cpumask_locks", 00:04:45.396 "framework_disable_cpumask_locks", 00:04:45.396 "framework_wait_init", 00:04:45.396 "framework_start_init", 00:04:45.396 "scsi_get_devices", 00:04:45.396 "bdev_get_histogram", 00:04:45.396 "bdev_enable_histogram", 00:04:45.396 "bdev_set_qos_limit", 00:04:45.396 "bdev_set_qd_sampling_period", 00:04:45.396 "bdev_get_bdevs", 00:04:45.396 "bdev_reset_iostat", 00:04:45.396 "bdev_get_iostat", 00:04:45.396 "bdev_examine", 00:04:45.396 "bdev_wait_for_examine", 00:04:45.396 "bdev_set_options", 00:04:45.396 "accel_get_stats", 00:04:45.396 "accel_set_options", 00:04:45.396 "accel_set_driver", 00:04:45.396 "accel_crypto_key_destroy", 00:04:45.396 "accel_crypto_keys_get", 00:04:45.396 "accel_crypto_key_create", 00:04:45.396 "accel_assign_opc", 00:04:45.396 "accel_get_module_info", 00:04:45.396 "accel_get_opc_assignments", 00:04:45.396 "vmd_rescan", 00:04:45.396 "vmd_remove_device", 00:04:45.396 "vmd_enable", 00:04:45.396 "sock_get_default_impl", 00:04:45.396 "sock_set_default_impl", 00:04:45.396 "sock_impl_set_options", 00:04:45.396 "sock_impl_get_options", 00:04:45.396 "iobuf_get_stats", 00:04:45.396 "iobuf_set_options", 00:04:45.396 "keyring_get_keys", 00:04:45.396 "framework_get_pci_devices", 00:04:45.396 
"framework_get_config", 00:04:45.396 "framework_get_subsystems", 00:04:45.396 "fsdev_set_opts", 00:04:45.396 "fsdev_get_opts", 00:04:45.396 "trace_get_info", 00:04:45.396 "trace_get_tpoint_group_mask", 00:04:45.396 "trace_disable_tpoint_group", 00:04:45.396 "trace_enable_tpoint_group", 00:04:45.396 "trace_clear_tpoint_mask", 00:04:45.396 "trace_set_tpoint_mask", 00:04:45.396 "notify_get_notifications", 00:04:45.396 "notify_get_types", 00:04:45.396 "spdk_get_version", 00:04:45.396 "rpc_get_methods" 00:04:45.396 ] 00:04:45.654 09:10:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.654 09:10:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.654 09:10:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58337 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58337 ']' 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58337 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58337 00:04:45.654 killing process with pid 58337 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58337' 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58337 00:04:45.654 09:10:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58337 00:04:47.024 ************************************ 00:04:47.024 END TEST spdkcli_tcp 00:04:47.024 ************************************ 00:04:47.024 00:04:47.024 real 0m2.761s 00:04:47.024 user 0m4.791s 00:04:47.024 sys 0m0.457s 00:04:47.024 09:10:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.024 09:10:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.024 09:10:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.024 09:10:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.024 09:10:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.024 09:10:38 -- common/autotest_common.sh@10 -- # set +x 00:04:47.024 ************************************ 00:04:47.024 START TEST dpdk_mem_utility 00:04:47.024 ************************************ 00:04:47.024 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:47.024 * Looking for test storage... 
00:04:47.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.025 09:10:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.025 --rc genhtml_branch_coverage=1 00:04:47.025 --rc genhtml_function_coverage=1 00:04:47.025 --rc genhtml_legend=1 00:04:47.025 --rc geninfo_all_blocks=1 00:04:47.025 --rc geninfo_unexecuted_blocks=1 00:04:47.025 00:04:47.025 ' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.025 --rc 
genhtml_branch_coverage=1 00:04:47.025 --rc genhtml_function_coverage=1 00:04:47.025 --rc genhtml_legend=1 00:04:47.025 --rc geninfo_all_blocks=1 00:04:47.025 --rc geninfo_unexecuted_blocks=1 00:04:47.025 00:04:47.025 ' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.025 --rc genhtml_branch_coverage=1 00:04:47.025 --rc genhtml_function_coverage=1 00:04:47.025 --rc genhtml_legend=1 00:04:47.025 --rc geninfo_all_blocks=1 00:04:47.025 --rc geninfo_unexecuted_blocks=1 00:04:47.025 00:04:47.025 ' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.025 --rc genhtml_branch_coverage=1 00:04:47.025 --rc genhtml_function_coverage=1 00:04:47.025 --rc genhtml_legend=1 00:04:47.025 --rc geninfo_all_blocks=1 00:04:47.025 --rc geninfo_unexecuted_blocks=1 00:04:47.025 00:04:47.025 ' 00:04:47.025 09:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.025 09:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58443 00:04:47.025 09:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58443 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58443 ']' 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.025 09:10:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.025 09:10:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.282 [2024-10-08 09:10:38.725313] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
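This dpdk_mem_utility run exercises SPDK's memory introspection, as traced below: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that dump offline, first as the heap/mempool/memzone summary and then, with -m 0, as the element-level listing for heap id 0. Against a live target the flow is roughly:

    # ask the running target to dump its memory stats
    scripts/rpc.py env_dpdk_get_mem_stats
    # reply: { "filename": "/tmp/spdk_mem_dump.txt" }

    # summarize heaps, mempools and memzones from the dump
    scripts/dpdk_mem_info.py

    # break down heap id 0 element by element
    scripts/dpdk_mem_info.py -m 0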
00:04:47.282 [2024-10-08 09:10:38.725452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58443 ] 00:04:47.282 [2024-10-08 09:10:38.873339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.541 [2024-10-08 09:10:39.028374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.110 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.110 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:48.110 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:48.110 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:48.110 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.110 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.110 { 00:04:48.110 "filename": "/tmp/spdk_mem_dump.txt" 00:04:48.110 } 00:04:48.110 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.110 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:48.110 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:48.110 1 heaps totaling size 866.000000 MiB 00:04:48.110 size: 866.000000 MiB heap id: 0 00:04:48.110 end heaps---------- 00:04:48.110 9 mempools totaling size 642.649841 MiB 00:04:48.110 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:48.110 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:48.110 size: 92.545471 MiB name: bdev_io_58443 00:04:48.110 size: 51.011292 MiB name: evtpool_58443 00:04:48.110 size: 50.003479 MiB name: msgpool_58443 00:04:48.110 size: 36.509338 MiB name: fsdev_io_58443 00:04:48.110 size: 21.763794 MiB name: PDU_Pool 00:04:48.110 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:48.110 size: 0.026123 MiB name: Session_Pool 00:04:48.110 end mempools------- 00:04:48.110 6 memzones totaling size 4.142822 MiB 00:04:48.110 size: 1.000366 MiB name: RG_ring_0_58443 00:04:48.110 size: 1.000366 MiB name: RG_ring_1_58443 00:04:48.110 size: 1.000366 MiB name: RG_ring_4_58443 00:04:48.110 size: 1.000366 MiB name: RG_ring_5_58443 00:04:48.110 size: 0.125366 MiB name: RG_ring_2_58443 00:04:48.110 size: 0.015991 MiB name: RG_ring_3_58443 00:04:48.110 end memzones------- 00:04:48.110 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:48.110 heap id: 0 total size: 866.000000 MiB number of busy elements: 308 number of free elements: 19 00:04:48.110 list of free elements. 
size: 19.915283 MiB 00:04:48.110 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:48.110 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:48.110 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:48.110 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:48.110 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:48.110 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:48.110 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:48.110 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:48.110 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:48.110 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:48.110 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:48.110 element at address: 0x200000200000 with size: 0.832153 MiB 00:04:48.110 element at address: 0x20001de00000 with size: 0.562195 MiB 00:04:48.110 element at address: 0x200003e00000 with size: 0.492126 MiB 00:04:48.110 element at address: 0x20001c000000 with size: 0.487976 MiB 00:04:48.110 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:48.110 element at address: 0x200015e00000 with size: 0.443237 MiB 00:04:48.110 element at address: 0x20002b200000 with size: 0.390442 MiB 00:04:48.110 element at address: 0x200003a00000 with size: 0.353088 MiB 00:04:48.110 list of standard malloc elements. size: 199.286011 MiB 00:04:48.110 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:04:48.110 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:04:48.110 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:04:48.110 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:04:48.110 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:04:48.110 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:48.110 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:04:48.110 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:48.110 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:04:48.110 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:04:48.110 element at address: 0x200015dff040 with size: 0.000305 MiB 00:04:48.110 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6300 with size: 0.000244 MiB 
00:04:48.110 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003aff800 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:04:48.110 element at 
address: 0x200003e7eac0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003efef00 with size: 0.000244 MiB 00:04:48.110 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:04:48.110 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff180 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff280 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff380 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff480 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff580 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff680 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff780 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff880 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dff980 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71780 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71880 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71980 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e72080 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015e72180 with size: 0.000244 MiB 00:04:48.111 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07cec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07cfc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d0c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d3c0 
with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de923c0 with size: 0.000244 MiB 
00:04:48.111 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:04:48.111 element at 
address: 0x20002b264040 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:04:48.111 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26de80 
with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:04:48.112 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:04:48.112 list of memzone associated elements. 
size: 646.798706 MiB 00:04:48.112 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:04:48.112 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:48.112 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:04:48.112 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:48.112 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:04:48.112 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58443_0 00:04:48.112 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:48.112 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58443_0 00:04:48.112 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:48.112 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58443_0 00:04:48.112 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:04:48.112 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58443_0 00:04:48.112 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:04:48.112 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:48.112 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:04:48.112 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:48.112 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:48.112 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58443 00:04:48.112 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:48.112 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58443 00:04:48.112 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:48.112 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58443 00:04:48.112 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:04:48.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:48.112 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:04:48.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:48.112 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:04:48.112 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:48.112 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:04:48.112 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:48.112 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:48.112 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58443 00:04:48.112 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:48.112 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58443 00:04:48.112 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:04:48.112 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58443 00:04:48.112 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:04:48.112 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58443 00:04:48.112 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:04:48.112 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58443 00:04:48.112 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:04:48.112 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58443 00:04:48.112 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:04:48.112 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:48.112 element at address: 0x200015e72280 with size: 0.500549 MiB 00:04:48.112 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:48.112 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:04:48.112 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:48.112 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:04:48.112 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58443 00:04:48.112 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:04:48.112 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:48.112 element at address: 0x20002b264140 with size: 0.023804 MiB 00:04:48.112 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:48.112 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:04:48.112 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58443 00:04:48.112 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:04:48.112 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:48.112 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:04:48.112 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58443 00:04:48.112 element at address: 0x200003aff900 with size: 0.000366 MiB 00:04:48.112 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58443 00:04:48.112 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:04:48.112 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58443 00:04:48.112 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:04:48.112 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:48.112 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:48.112 09:10:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58443 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58443 ']' 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58443 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58443 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.112 killing process with pid 58443 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58443' 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58443 00:04:48.112 09:10:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58443 00:04:49.486 ************************************ 00:04:49.486 END TEST dpdk_mem_utility 00:04:49.486 ************************************ 00:04:49.486 00:04:49.486 real 0m2.483s 00:04:49.486 user 0m2.543s 00:04:49.486 sys 0m0.381s 00:04:49.486 09:10:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.486 09:10:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.486 09:10:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.486 09:10:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.486 09:10:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.486 09:10:41 -- common/autotest_common.sh@10 -- # set +x 
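
The teardown just traced is the harness's killprocess idiom: check that the pid is still alive with kill -0, read its comm name with ps (reactor_0 here) before deciding how to signal it, announce the kill, then wait so the exit status is reaped before END TEST is printed. A stripped-down sketch of that idiom; the real helper in common/autotest_common.sh also special-cases sudo-wrapped processes:

  # Minimal stand-in for the killprocess helper seen in the trace above.
  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap it; a nonzero status is expected here
  }
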
00:04:49.486 ************************************ 00:04:49.486 START TEST event 00:04:49.486 ************************************ 00:04:49.486 09:10:41 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:49.486 * Looking for test storage... 00:04:49.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:49.744 09:10:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:49.744 09:10:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.744 09:10:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.744 09:10:41 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:49.744 09:10:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.744 09:10:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.744 ************************************ 00:04:49.744 START TEST event_perf 00:04:49.744 ************************************ 00:04:49.744 09:10:41 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.744 Running I/O for 1 seconds...[2024-10-08 09:10:41.217647] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... [2024-10-08 09:10:41.217761] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58534 ] [2024-10-08 09:10:41.368260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 [2024-10-08 09:10:41.559123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 Running I/O for 1 seconds...[2024-10-08 09:10:41.559512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 [2024-10-08 09:10:41.559622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 [2024-10-08 09:10:41.559641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.417 00:04:51.417 lcore 0: 196206 00:04:51.417 lcore 1: 196204 00:04:51.417 lcore 2: 196206 00:04:51.417 lcore 3: 196206 00:04:51.417 done. 
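
The lcore counters above are event_perf's result: -m 0xF puts a reactor on each of cores 0-3 and -t 1 runs the benchmark for one second, so each "lcore N" line is roughly how many events that reactor processed in the interval (about 196k per core in this run). Rerunning it by hand, assuming the build tree from this log:

  # Event-processing microbenchmark: four reactors (mask 0xF), one-second run.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # Prints one "lcore N: <events>" line per reactor, then "done.", as above.
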
00:04:51.417 00:04:51.417 real 0m1.642s 00:04:51.417 user 0m4.435s 00:04:51.417 sys 0m0.086s 00:04:51.417 09:10:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.417 09:10:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.417 ************************************ 00:04:51.417 END TEST event_perf 00:04:51.417 ************************************ 00:04:51.417 09:10:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.417 09:10:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:51.417 09:10:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.417 09:10:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.417 ************************************ 00:04:51.417 START TEST event_reactor 00:04:51.417 ************************************ 00:04:51.417 09:10:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.417 [2024-10-08 09:10:42.900991] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:51.417 [2024-10-08 09:10:42.901103] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58574 ] 00:04:51.417 [2024-10-08 09:10:43.049560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.678 [2024-10-08 09:10:43.236375] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.060 test_start 00:04:53.060 oneshot 00:04:53.060 tick 100 00:04:53.060 tick 100 00:04:53.060 tick 250 00:04:53.060 tick 100 00:04:53.060 tick 100 00:04:53.060 tick 250 00:04:53.060 tick 100 00:04:53.060 tick 500 00:04:53.060 tick 100 00:04:53.060 tick 100 00:04:53.060 tick 250 00:04:53.060 tick 100 00:04:53.060 tick 100 00:04:53.060 test_end 00:04:53.060 00:04:53.060 real 0m1.582s 00:04:53.060 user 0m1.393s 00:04:53.060 sys 0m0.080s 00:04:53.060 09:10:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.060 ************************************ 00:04:53.060 END TEST event_reactor 00:04:53.060 ************************************ 00:04:53.060 09:10:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.060 09:10:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.060 09:10:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:53.060 09:10:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.060 09:10:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.060 ************************************ 00:04:53.060 START TEST event_reactor_perf 00:04:53.060 ************************************ 00:04:53.060 09:10:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.060 [2024-10-08 09:10:44.519361] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
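
The test_start/oneshot/tick trace above comes from event_reactor exercising timed pollers on a single reactor: oneshot fires once, while each "tick N" line marks a poller registered with period N, which is why the tick 100 pollers dominate the one-second window. event_reactor_perf, starting here, instead measures raw event throughput on one reactor. Both binaries take the same -t <seconds> switch; rerunning them by hand, assuming this log's tree layout:

  # Poller/tick exercise, then the single-reactor event throughput benchmark.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1   # reports events per second
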
00:04:53.060 [2024-10-08 09:10:44.519545] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:04:53.060 [2024-10-08 09:10:44.670735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.327 [2024-10-08 09:10:44.856082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.718 test_start 00:04:54.718 test_end 00:04:54.718 Performance: 313831 events per second 00:04:54.718 00:04:54.718 real 0m1.636s 00:04:54.718 user 0m1.443s 00:04:54.718 sys 0m0.083s 00:04:54.718 09:10:46 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.718 09:10:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.718 ************************************ 00:04:54.718 END TEST event_reactor_perf 00:04:54.718 ************************************ 00:04:54.718 09:10:46 event -- event/event.sh@49 -- # uname -s 00:04:54.718 09:10:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.718 09:10:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.718 09:10:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.718 09:10:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.718 09:10:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.718 ************************************ 00:04:54.718 START TEST event_scheduler 00:04:54.718 ************************************ 00:04:54.718 09:10:46 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.718 * Looking for test storage... 
00:04:54.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:54.718 09:10:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.718 09:10:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58686 00:04:54.718 09:10:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.718 09:10:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58686 00:04:54.718 09:10:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58686 ']' 00:04:54.718 09:10:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:54.718 09:10:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.719 09:10:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.719 09:10:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.719 09:10:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.719 09:10:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.719 [2024-10-08 09:10:46.368480] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:04:54.719 [2024-10-08 09:10:46.368615] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58686 ] 00:04:54.976 [2024-10-08 09:10:46.513750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.234 [2024-10-08 09:10:46.688462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.234 [2024-10-08 09:10:46.688531] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.234 [2024-10-08 09:10:46.689107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.234 [2024-10-08 09:10:46.689130] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:55.802 09:10:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.802 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.802 POWER: Cannot set governor of lcore 0 to performance 00:04:55.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.802 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.802 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.802 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:55.802 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:55.802 POWER: Unable to set Power Management Environment for lcore 0 00:04:55.802 [2024-10-08 09:10:47.218426] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:55.802 [2024-10-08 09:10:47.218444] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:55.802 [2024-10-08 09:10:47.218455] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:55.802 [2024-10-08 09:10:47.218471] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:55.802 [2024-10-08 09:10:47.218478] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:55.802 [2024-10-08 09:10:47.218495] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 [2024-10-08 09:10:47.439856] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 ************************************
00:04:55.802 START TEST scheduler_create_thread
00:04:55.802 ************************************
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 2
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 3
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 4
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:55.802 5
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.802 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 6
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 7
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 8
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 9
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 10
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.061 09:10:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:56.627 09:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:56.627 09:10:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:56.627 09:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:56.627 09:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:58.000 09:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:58.000 09:10:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:58.000 09:10:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:58.000 09:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:58.000 09:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:58.936 09:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:58.936
00:04:58.936 real 0m3.096s
00:04:58.936 user 0m0.012s
00:04:58.936 sys 0m0.006s
00:04:58.936 09:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:58.936 ************************************
00:04:58.936 END TEST scheduler_create_thread
00:04:58.936 ************************************
00:04:58.936 09:10:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:58.936 09:10:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:58.936 09:10:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58686
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58686 ']'
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58686
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58686
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58686'
00:04:58.936 killing process with pid 58686
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58686
00:04:58.936 09:10:50 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58686
00:04:59.503 [2024-10-08 09:10:50.922667] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:00.070
00:05:00.070 real 0m5.462s
00:05:00.070 user 0m10.362s
00:05:00.070 sys 0m0.328s
00:05:00.070 09:10:51 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:00.070 09:10:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:00.070 ************************************
00:05:00.070 END TEST event_scheduler
00:05:00.070 ************************************
00:05:00.070 09:10:51 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:00.070 09:10:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:00.070 09:10:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:00.070 09:10:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.070 09:10:51 event -- common/autotest_common.sh@10 -- # set +x
00:05:00.070 ************************************
00:05:00.070 START TEST app_repeat
00:05:00.070 ************************************
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58798
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58798'
00:05:00.070 Process app_repeat pid: 58798
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:00.070 spdk_app_start Round 0
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:00.070 09:10:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58798 /var/tmp/spdk-nbd.sock
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58798 ']'
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:00.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
09:10:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:00.070 09:10:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:00.070 [2024-10-08 09:10:51.714161] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
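Editor's note on the event_scheduler trace above: the test drives SPDK's dynamic scheduler purely through scripts/rpc.py with the test-tree scheduler_plugin, creating pinned busy threads, pinned idle threads, and unpinned threads whose activity it later adjusts or deletes. A minimal bash sketch of that RPC sequence, assuming the app is already up on the default RPC socket (loop bounds are illustrative; commands and names are taken from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
plugin=scheduler_plugin

# one fully busy thread pinned to each of the first four cores (masks 0x1..0x8)
for core in 0 1 2 3; do
  "$rpc" --plugin "$plugin" scheduler_thread_create \
    -n active_pinned -m "0x$((1 << core))" -a 100
done

# a thread created idle, then bumped to 50% activity via its returned thread id
id=$("$rpc" --plugin "$plugin" scheduler_thread_create -n half_active -a 0)
"$rpc" --plugin "$plugin" scheduler_thread_set_active "$id" 50

# threads can also be deleted while the scheduler keeps rebalancing
id=$("$rpc" --plugin "$plugin" scheduler_thread_create -n deleted -a 100)
"$rpc" --plugin "$plugin" scheduler_thread_delete "$id"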
00:05:00.070 [2024-10-08 09:10:51.714283] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58798 ]
00:05:00.329 [2024-10-08 09:10:51.862247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:00.587 [2024-10-08 09:10:52.062189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:00.587 [2024-10-08 09:10:52.062437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.154 09:10:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:01.154 09:10:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:01.154 09:10:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:01.436 Malloc0
00:05:01.436 09:10:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:01.693 Malloc1
00:05:01.693 09:10:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:01.693 09:10:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:01.693 /dev/nbd0
00:05:01.951 09:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:01.951 09:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:01.951 1+0 records in
00:05:01.951 1+0 records out
00:05:01.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245335 s, 16.7 MB/s
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:01.951 09:10:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:01.951 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:01.951 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:01.951 09:10:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:02.209 /dev/nbd1
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:02.209 1+0 records in
00:05:02.209 1+0 records out
00:05:02.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248968 s, 16.5 MB/s
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:02.209 09:10:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
09:10:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
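Each round above follows the same attach path: create a malloc bdev over the app's domain socket, export it as an NBD node, then wait for the kernel device to answer I/O. A condensed sketch reconstructed from the trace; the poll interval and the temp-file location are assumptions, everything else mirrors the commands shown:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

waitfornbd() {
  local nbd_name=$1 i size
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed back-off; the real helper's interval may differ
  done
  # prove the device completes I/O: read one 4 KiB block with O_DIRECT
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]
}

bdev=$(rpc bdev_malloc_create 64 4096)   # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
rpc nbd_start_disk "$bdev" /dev/nbd0
waitfornbd nbd0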
00:05:02.209 09:10:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:02.469 09:10:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:02.469 {
00:05:02.469 "nbd_device": "/dev/nbd0",
00:05:02.469 "bdev_name": "Malloc0"
00:05:02.469 },
00:05:02.469 {
00:05:02.469 "nbd_device": "/dev/nbd1",
00:05:02.469 "bdev_name": "Malloc1"
00:05:02.469 }
00:05:02.469 ]'
00:05:02.469 09:10:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:02.469 09:10:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:02.469 {
00:05:02.469 "nbd_device": "/dev/nbd0",
00:05:02.469 "bdev_name": "Malloc0"
00:05:02.469 },
00:05:02.469 {
00:05:02.469 "nbd_device": "/dev/nbd1",
00:05:02.469 "bdev_name": "Malloc1"
00:05:02.469 }
00:05:02.469 ]'
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:02.469 /dev/nbd1'
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:02.469 /dev/nbd1'
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:02.469 09:10:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:02.470 256+0 records in
00:05:02.470 256+0 records out
00:05:02.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00785584 s, 133 MB/s
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:02.470 256+0 records in
00:05:02.470 256+0 records out
00:05:02.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204202 s, 51.4 MB/s
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:02.470 256+0 records in
00:05:02.470 256+0 records out
00:05:02.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194981 s, 53.8 MB/s
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:02.470 09:10:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:02.729 09:10:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:02.987 09:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:02.987 09:10:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:02.987 09:10:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:02.987 09:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:02.987 09:10:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:02.988 09:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:03.246 09:10:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:03.246 09:10:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:03.505 09:10:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:04.441 [2024-10-08 09:10:55.773943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:04.441 [2024-10-08 09:10:55.927278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:04.441 [2024-10-08 09:10:55.927284] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.441 [2024-10-08 09:10:56.031707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:04.441 [2024-10-08 09:10:56.031785] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:06.965 09:10:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:06.965 spdk_app_start Round 1
00:05:06.965 09:10:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:06.965 09:10:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58798 /var/tmp/spdk-nbd.sock
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58798 ']'
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:06.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
09:10:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
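The data-integrity pass traced above is plain dd plus cmp: the same random 1 MiB file is pushed through every NBD node with O_DIRECT, then compared back byte for byte. A sketch using the same sizes as the trace; the temp-file path here is illustrative (the test keeps it inside the repo):

tmp=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
  dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write phase
done
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp" "$dev"   # verify phase: fails on the first differing byte
done
rm "$tmp"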
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:06.965 09:10:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:06.965 09:10:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:06.965 Malloc0
00:05:06.965 09:10:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:07.273 Malloc1
00:05:07.273 09:10:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:07.273 09:10:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:07.530 /dev/nbd0
00:05:07.530 09:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:07.530 09:10:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:07.530 1+0 records in
00:05:07.530 1+0 records out
00:05:07.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163408 s, 25.1 MB/s
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:07.530 09:10:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:07.530 09:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:07.530 09:10:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:07.530 09:10:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:07.788 /dev/nbd1
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:07.788 1+0 records in
00:05:07.788 1+0 records out
00:05:07.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313302 s, 13.1 MB/s
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:07.788 09:10:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:07.788 {
00:05:07.788 "nbd_device": "/dev/nbd0",
00:05:07.788 "bdev_name": "Malloc0"
00:05:07.788 },
00:05:07.788 {
00:05:07.788 "nbd_device": "/dev/nbd1",
00:05:07.788 "bdev_name": "Malloc1"
00:05:07.788 }
00:05:07.788 ]'
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:07.788 {
00:05:07.788 "nbd_device": "/dev/nbd0",
00:05:07.788 "bdev_name": "Malloc0"
00:05:07.788 },
00:05:07.788 {
00:05:07.788 "nbd_device": "/dev/nbd1",
00:05:07.788 "bdev_name": "Malloc1"
00:05:07.788 }
00:05:07.788 ]'
00:05:07.788 09:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:08.046 /dev/nbd1'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:08.046 /dev/nbd1'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:08.046 256+0 records in
00:05:08.046 256+0 records out
00:05:08.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100372 s, 104 MB/s
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:08.046 256+0 records in
00:05:08.046 256+0 records out
00:05:08.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162386 s, 64.6 MB/s
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:08.046 256+0 records in
00:05:08.046 256+0 records out
00:05:08.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018565 s, 56.5 MB/s
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:08.046 09:10:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:08.303 09:10:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:08.561 09:10:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:08.561 09:10:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:08.561 09:10:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:08.561 09:10:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:08.561 09:10:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:08.561 09:11:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:08.561 09:11:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:09.127 09:11:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:09.693 [2024-10-08 09:11:01.186779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:09.693 [2024-10-08 09:11:01.338429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.693 [2024-10-08 09:11:01.338462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:09.952 [2024-10-08 09:11:01.440013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:09.952 [2024-10-08 09:11:01.440084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:12.477 spdk_app_start Round 2
00:05:12.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
09:11:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
09:11:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
09:11:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58798 /var/tmp/spdk-nbd.sock
09:11:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58798 ']'
09:11:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
09:11:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
09:11:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:12.477 09:11:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:12.477 09:11:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:12.477 09:11:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:12.477 09:11:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:12.477 09:11:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:12.477 Malloc0
00:05:12.477 09:11:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:12.735 Malloc1
00:05:12.735 09:11:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:12.735 09:11:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:12.993 /dev/nbd0
00:05:12.993 09:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:12.993 09:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:12.993 1+0 records in
00:05:12.993 1+0 records out
00:05:12.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184739 s, 22.2 MB/s
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:12.993 09:11:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:12.993 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:12.993 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:12.993 09:11:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:13.250 /dev/nbd1
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:13.250 1+0 records in
00:05:13.250 1+0 records out
00:05:13.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199268 s, 20.6 MB/s
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:13.250 09:11:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.250 09:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:13.508 {
00:05:13.508 "nbd_device": "/dev/nbd0",
00:05:13.508 "bdev_name": "Malloc0"
00:05:13.508 },
00:05:13.508 {
00:05:13.508 "nbd_device": "/dev/nbd1",
00:05:13.508 "bdev_name": "Malloc1"
00:05:13.508 }
00:05:13.508 ]'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:13.508 {
00:05:13.508 "nbd_device": "/dev/nbd0",
00:05:13.508 "bdev_name": "Malloc0"
00:05:13.508 },
00:05:13.508 {
00:05:13.508 "nbd_device": "/dev/nbd1",
00:05:13.508 "bdev_name": "Malloc1"
00:05:13.508 }
00:05:13.508 ]'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:13.508 /dev/nbd1'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:13.508 /dev/nbd1'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:13.508 256+0 records in
00:05:13.508 256+0 records out
00:05:13.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667908 s, 157 MB/s
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:13.508 256+0 records in
00:05:13.508 256+0 records out
00:05:13.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140091 s, 74.8 MB/s
00:05:13.508 09:11:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:13.508 256+0 records in
00:05:13.508 256+0 records out
00:05:13.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015959 s, 65.7 MB/s
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:13.508 09:11:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:13.509 09:11:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.767 09:11:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:14.025 09:11:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:14.025 09:11:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:14.284 09:11:05 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:15.247 [2024-10-08 09:11:06.583980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:15.247 [2024-10-08 09:11:06.734742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.247 [2024-10-08 09:11:06.734879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.247 [2024-10-08 09:11:06.839747] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:15.247 [2024-10-08 09:11:06.839804] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
09:11:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58798 /var/tmp/spdk-nbd.sock
09:11:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58798 ']'
09:11:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
09:11:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
09:11:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
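Teardown after every round above is symmetric: stop each NBD node, confirm the count is back to zero, then ask the app to exit. A sketch of that sequence, reusing the nbd_get_count helper from the note above (SIGTERM is the app's normal shutdown path, as the trace shows):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

for dev in /dev/nbd0 /dev/nbd1; do
  rpc nbd_stop_disk "$dev"
done
[ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -eq 0 ]
rpc spdk_kill_instance SIGTERM   # app logs shutdown; the next round then starts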
00:05:17.774 09:11:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:17.774 09:11:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:17.775 09:11:09 event.app_repeat -- event/event.sh@39 -- # killprocess 58798
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58798 ']'
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58798
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58798
killing process with pid 58798
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58798'
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58798
00:05:17.775 09:11:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58798
00:05:18.370 spdk_app_start is called in Round 0.
00:05:18.370 Shutdown signal received, stop current app iteration
00:05:18.370 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization...
00:05:18.370 spdk_app_start is called in Round 1.
00:05:18.370 Shutdown signal received, stop current app iteration
00:05:18.370 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization...
00:05:18.370 spdk_app_start is called in Round 2.
00:05:18.370 Shutdown signal received, stop current app iteration
00:05:18.370 Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 reinitialization...
00:05:18.370 spdk_app_start is called in Round 3.
00:05:18.370 Shutdown signal received, stop current app iteration
00:05:18.370 09:11:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:18.370 09:11:09 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:18.370
00:05:18.370 real 0m18.108s
00:05:18.370 user 0m38.991s
00:05:18.370 sys 0m2.173s
00:05:18.370 ************************************
00:05:18.370 END TEST app_repeat
00:05:18.370 09:11:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.370 09:11:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:18.370 ************************************
00:05:18.370 09:11:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:18.370 09:11:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:18.370 09:11:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.370 09:11:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.370 09:11:09 event -- common/autotest_common.sh@10 -- # set +x
00:05:18.370 ************************************
00:05:18.370 START TEST cpu_locks
00:05:18.370 ************************************
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:18.370 * Looking for test storage...
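Each app_repeat round visible above follows the same shape: request shutdown over the app's RPC socket, pause, then wait for the relaunched instance to listen again. A condensed sketch of the driving loop, using the rpc.py path and socket from the trace (the explicit loop structure is an assumption):

    # Sketch: drive four shutdown/restart rounds against the test app.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    for round in 0 1 2 3; do
        "$RPC" -s "$SOCK" spdk_kill_instance SIGTERM   # ask the app to exit
        sleep 3                                        # let the reactors wind down
    done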
00:05:18.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:18.370 09:11:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:18.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.370 --rc genhtml_branch_coverage=1
00:05:18.370 --rc genhtml_function_coverage=1
00:05:18.370 --rc genhtml_legend=1
00:05:18.370 --rc geninfo_all_blocks=1
00:05:18.370 --rc geninfo_unexecuted_blocks=1
00:05:18.370
00:05:18.370 '
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:18.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.370 --rc genhtml_branch_coverage=1
00:05:18.370 --rc genhtml_function_coverage=1
00:05:18.370 --rc genhtml_legend=1
00:05:18.370 --rc geninfo_all_blocks=1
00:05:18.370 --rc geninfo_unexecuted_blocks=1
00:05:18.370
00:05:18.370 '
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:18.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.370 --rc genhtml_branch_coverage=1
00:05:18.370 --rc genhtml_function_coverage=1
00:05:18.370 --rc genhtml_legend=1
00:05:18.370 --rc geninfo_all_blocks=1
00:05:18.370 --rc geninfo_unexecuted_blocks=1
00:05:18.370
00:05:18.370 '
00:05:18.370 09:11:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:18.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.370 --rc genhtml_branch_coverage=1
00:05:18.370 --rc genhtml_function_coverage=1
00:05:18.371 --rc genhtml_legend=1
00:05:18.371 --rc geninfo_all_blocks=1
00:05:18.371 --rc geninfo_unexecuted_blocks=1
00:05:18.371
00:05:18.371 '
00:05:18.371 09:11:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:18.371 09:11:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:18.371 09:11:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:18.371 09:11:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:18.371 09:11:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:18.371 09:11:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.371 09:11:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.371 ************************************
00:05:18.371 START TEST default_locks
00:05:18.371 ************************************
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59234
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59234
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59234 ']'
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:18.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:18.371 09:11:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.371 [2024-10-08 09:11:10.037114] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
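The scripts/common.sh trace above is a field-by-field version comparison: lt 1.15 2 splits both versions on ., - and :, zero-pads the shorter one, and compares numerically. A simplified reconstruction follows (the real helper also validates each field via decimal; treat this as a sketch assuming purely numeric fields):

    # Sketch of cmp_versions/lt, reconstructed from the trace.
    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return   # first unequal field decides
            fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '==' ]]                  # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

With these definitions, lt 1.15 2 succeeds because 1 < 2 settles the comparison at the first field, matching the return 0 seen in the trace.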
00:05:18.371 [2024-10-08 09:11:10.037246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59234 ]
00:05:18.629 [2024-10-08 09:11:10.185510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.889 [2024-10-08 09:11:10.338498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.457 09:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:19.457 09:11:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:19.457 09:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59234
00:05:19.457 09:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59234
00:05:19.457 09:11:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59234
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59234 ']'
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59234
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59234
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59234
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59234'
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59234
00:05:19.457 09:11:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59234
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59234
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59234
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59234
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59234 ']'
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.838 ERROR: process (pid: 59234) is no longer running
00:05:20.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59234) - No such process
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:20.838 09:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:20.839 09:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:20.839 09:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:20.839 09:11:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:20.839
00:05:20.839 real 0m2.427s
00:05:20.839 user 0m2.437s
00:05:20.839 sys 0m0.429s
00:05:20.839 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.839 09:11:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.839 ************************************
00:05:20.839 END TEST default_locks
00:05:20.839 ************************************
00:05:20.839 09:11:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:20.839 09:11:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.839 09:11:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.839 09:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.839 ************************************
00:05:20.839 START TEST default_locks_via_rpc
00:05:20.839 ************************************
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59287
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59287
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59287 ']'
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:20.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
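The locks_exist check traced in default_locks asserts that the target holds its per-core lock file. A sketch of the helper, assuming lslocks from util-linux and the spdk_cpu_lock file-name prefix seen in the trace:

    # Sketch: does the given PID hold an SPDK CPU-core lock file?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # lock files live under /var/tmp
    }

The follow-up NOT waitforlisten then confirms the killed target is really gone: the assertion passes precisely because waitforlisten fails.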
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:20.839 09:11:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.839 [2024-10-08 09:11:12.504884] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:20.839 [2024-10-08 09:11:12.505007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ]
00:05:21.097 [2024-10-08 09:11:12.652216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.355 [2024-10-08 09:11:12.805400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59287 ']'
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59287'
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59287
00:05:21.921 09:11:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59287
00:05:23.293
00:05:23.293 real 0m2.427s
00:05:23.293 user 0m2.383s
00:05:23.293 sys 0m0.467s
00:05:23.293 09:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:23.293 09:11:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.293 ************************************
00:05:23.293 END TEST default_locks_via_rpc
00:05:23.293 ************************************
00:05:23.293 09:11:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:23.293 09:11:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:23.293 09:11:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:23.293 09:11:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.293 ************************************
00:05:23.293 START TEST non_locking_app_on_locked_coremask
00:05:23.293 ************************************
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59339
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59339 /var/tmp/spdk.sock
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59339 ']'
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:23.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:23.293 09:11:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.293 [2024-10-08 09:11:14.967444] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
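default_locks_via_rpc exercises the same locking, but toggled at runtime instead of with a startup flag. Both RPC names appear verbatim in the trace; a sketch of the pair against a target already listening on /var/tmp/spdk.sock:

    # Sketch: flip CPU-core file locking on a live target over JSON-RPC.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" framework_disable_cpumask_locks   # drop the per-core lock files
    "$RPC" framework_enable_cpumask_locks    # re-acquire them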
00:05:23.293 [2024-10-08 09:11:14.967547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ]
00:05:23.554 [2024-10-08 09:11:15.106460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.815 [2024-10-08 09:11:15.259418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59355
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59355 /var/tmp/spdk2.sock
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59355 ']'
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:24.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:24.435 09:11:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:24.435 [2024-10-08 09:11:15.888983] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:24.435 [2024-10-08 09:11:15.889087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ]
00:05:24.435 [2024-10-08 09:11:16.036028] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
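non_locking_app_on_locked_coremask then shows a second target sharing core 0 as long as it opts out of locking. A condensed sketch of the two launches, with the binary path, cpumask, and flags exactly as traced (the backgrounding is an assumption):

    # Sketch: first instance claims core 0; second skips the lock check.
    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                                                # holds /var/tmp/spdk_cpu_lock_000
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # same core, no lock taken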
00:05:24.435 [2024-10-08 09:11:16.036077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.695 [2024-10-08 09:11:16.347948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.635 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:25.635 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:25.635 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59339
00:05:25.635 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:25.635 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59339
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59339
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59339 ']'
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59339
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:25.895 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59339
00:05:26.157 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:26.157 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59339
00:05:26.157 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59339'
00:05:26.157 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59339
00:05:26.157 09:11:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59339
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59355
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59355 ']'
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59355
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59355
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:28.695 killing process with pid 59355
09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59355'
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59355
00:05:28.695 09:11:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59355
00:05:30.186
00:05:30.186 real 0m6.667s
00:05:30.186 user 0m6.962s
00:05:30.186 sys 0m0.789s
00:05:30.186 09:11:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.186 09:11:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.186 ************************************
00:05:30.186 END TEST non_locking_app_on_locked_coremask
00:05:30.186 ************************************
00:05:30.186 09:11:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:30.186 09:11:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.186 09:11:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.186 09:11:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.186 ************************************
00:05:30.186 START TEST locking_app_on_unlocked_coremask
00:05:30.186 ************************************
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59457
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59457 /var/tmp/spdk.sock
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59457 ']'
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:30.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:30.186 09:11:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.186 [2024-10-08 09:11:21.670820] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:30.186 [2024-10-08 09:11:21.670920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ]
00:05:30.186 [2024-10-08 09:11:21.813262] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
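killprocess, traced twice above, is deliberately defensive: it refuses to signal anything unless the PID is non-empty, alive, and inspectable. A sketch reconstructed from the traced checks (the sudo special case in the real helper is omitted here):

    # Sketch of killprocess, following the checks visible in the trace.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 1              # still alive?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                  # reap, ignore status
    }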
00:05:30.186 [2024-10-08 09:11:21.813317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.446 [2024-10-08 09:11:21.967322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59473
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59473 /var/tmp/spdk2.sock
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59473 ']'
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:31.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:31.016 09:11:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.016 [2024-10-08 09:11:22.586722] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:31.016 [2024-10-08 09:11:22.586847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ]
00:05:31.276 [2024-10-08 09:11:22.733346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.535 [2024-10-08 09:11:23.052045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.476 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:32.476 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:32.476 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59473
00:05:32.476 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59473
00:05:32.476 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.736 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59457
00:05:32.737 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59457 ']'
00:05:32.737 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59457
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59457
killing process with pid 59457
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59457'
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59457
00:05:32.998 09:11:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59457
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59473
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59473 ']'
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59473
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59473
killing process with pid 59473
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59473'
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59473
00:05:35.538 09:11:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59473
00:05:36.917 ************************************
00:05:36.917
00:05:36.917 real 0m6.818s
00:05:36.917 user 0m7.047s
00:05:36.917 sys 0m0.907s
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.917 END TEST locking_app_on_unlocked_coremask
00:05:36.917 ************************************
00:05:36.917 09:11:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:36.917 09:11:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:36.917 09:11:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:36.917 09:11:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:36.917 ************************************
00:05:36.917 START TEST locking_app_on_locked_coremask
00:05:36.917 ************************************
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:05:36.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59575
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59575 /var/tmp/spdk.sock
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59575 ']'
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:36.917 09:11:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.917 [2024-10-08 09:11:28.524494] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:36.917 [2024-10-08 09:11:28.524731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ]
00:05:37.175 [2024-10-08 09:11:28.666097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.175 [2024-10-08 09:11:28.847961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59586
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59586 /var/tmp/spdk2.sock
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59586 /var/tmp/spdk2.sock
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59586 /var/tmp/spdk2.sock
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59586 ']'
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:37.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:37.742 09:11:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.000 [2024-10-08 09:11:29.450294] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:38.000 [2024-10-08 09:11:29.450456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59586 ]
00:05:38.000 [2024-10-08 09:11:29.605113] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59575 has claimed it.
00:05:38.000 [2024-10-08 09:11:29.605179] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:38.568 ERROR: process (pid: 59586) is no longer running
00:05:38.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59586) - No such process
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59575
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:38.568 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59575
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59575
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59575 ']'
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59575
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59575
killing process with pid 59575
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59575'
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59575
00:05:38.827 09:11:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59575
00:05:40.210 ************************************
00:05:40.210 END TEST locking_app_on_locked_coremask
00:05:40.210 ************************************
00:05:40.210
00:05:40.210 real 0m3.188s
00:05:40.210 user 0m3.393s
00:05:40.210 sys 0m0.548s
00:05:40.210 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.210 09:11:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.210 09:11:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:40.210 09:11:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:40.210 09:11:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.210 09:11:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:40.210 ************************************
00:05:40.210 START TEST locking_overlapped_coremask
00:05:40.210 ************************************
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59644
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59644 /var/tmp/spdk.sock
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59644 ']'
00:05:40.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:40.210 09:11:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.210 [2024-10-08 09:11:31.774175] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
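locking_app_on_locked_coremask expects the second launch to die on the 'Cannot create lock on core 0' error, so the assertion is inverted with NOT. A sketch of that negative check, assuming NOT simply succeeds when its command fails (the real helper also inspects the exit status, as the es= lines above show):

    # Sketch: assert that a command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpectedly succeeded
        fi
        return 0        # failed, as the test requires
    }
    NOT waitforlisten 59586 /var/tmp/spdk2.sock   # second target must never come up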
00:05:40.210 [2024-10-08 09:11:31.774313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59644 ]
00:05:40.468 [2024-10-08 09:11:31.926016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:40.468 [2024-10-08 09:11:32.083422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.468 [2024-10-08 09:11:32.083911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.468 [2024-10-08 09:11:32.083939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59661
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59661 /var/tmp/spdk2.sock
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59661 /var/tmp/spdk2.sock
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:41.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59661 /var/tmp/spdk2.sock
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59661 ']'
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:41.033 09:11:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.290 [2024-10-08 09:11:32.646148] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
00:05:41.033 [2024-10-08 09:11:32.646249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:05:41.290 [2024-10-08 09:11:32.797664] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59644 has claimed it. 00:05:41.290 [2024-10-08 09:11:32.797868] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.876 ERROR: process (pid: 59661) is no longer running 00:05:41.876 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59661) - No such process 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59644 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59644 ']' 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59644 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59644 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59644' 00:05:41.876 killing process with pid 59644 00:05:41.876 09:11:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59644 00:05:41.876 09:11:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59644 00:05:43.245 00:05:43.245 real 0m2.911s 00:05:43.245 user 0m7.621s 00:05:43.245 sys 0m0.405s 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 ************************************ 00:05:43.245 END TEST locking_overlapped_coremask 00:05:43.245 ************************************ 00:05:43.245 09:11:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.245 09:11:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.245 09:11:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.245 09:11:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 ************************************ 00:05:43.245 START TEST locking_overlapped_coremask_via_rpc 00:05:43.245 ************************************ 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59714 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59714 /var/tmp/spdk.sock 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59714 ']' 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.245 09:11:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 [2024-10-08 09:11:34.724721] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:43.245 [2024-10-08 09:11:34.724843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:05:43.246 [2024-10-08 09:11:34.872990] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
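The failure traced above is SPDK's per-core locking at work: each reactor core claims a lock file named /var/tmp/spdk_cpu_lock_NNN (the names appear verbatim in the check_remaining_locks trace), so a second target whose core mask overlaps a running one fails, since 0x7 covers cores 0-2 and 0x1c covers cores 2-4, and both include core 2. A minimal sketch of such a claim, assuming flock(1)-style advisory locks (the real logic lives in app.c's claim_cpu_cores):

  # Hedged sketch: one lock file per reactor core. The file naming matches the
  # trace above; the flock-based claim itself is an assumption for illustration.
  claim_core() {
      local core=$1 fd
      exec {fd}>"$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")"
      flock -n "$fd" || { echo "Cannot create lock on core $core" >&2; return 1; }
  }
  for core in 0 1 2; do claim_core "$core" || exit 1; done    # mask 0x7

A second shell running the same loop for cores 2 3 4 (mask 0x1c) fails on core 2 while the first keeps its descriptors open, which is exactly what pid 59661 hit.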
00:05:43.246 [2024-10-08 09:11:34.873054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.503 [2024-10-08 09:11:35.026344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.503 [2024-10-08 09:11:35.026986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.503 [2024-10-08 09:11:35.026999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59728 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59728 /var/tmp/spdk2.sock 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59728 ']' 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.066 09:11:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.066 [2024-10-08 09:11:35.579053] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:44.066 [2024-10-08 09:11:35.579167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59728 ] 00:05:44.066 [2024-10-08 09:11:35.734583] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
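The via_rpc variant defers that conflict: both targets are started with --disable-cpumask-locks (hence the two "CPU core locks deactivated" notices above), so the overlapping masks boot side by side and the collision only surfaces once locking is re-enabled over RPC. The launch commands, taken from the trace:

  # Both targets boot despite sharing core 2, because no lock files are
  # claimed at startup when --disable-cpumask-locks is given.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks                          # pid 59714
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks  # pid 59728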
00:05:44.066 [2024-10-08 09:11:35.734643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.629 [2024-10-08 09:11:36.111415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.629 [2024-10-08 09:11:36.115499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.629 [2024-10-08 09:11:36.115517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.561 [2024-10-08 09:11:37.168575] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59714 has claimed it. 
00:05:45.561 request: 00:05:45.561 { 00:05:45.561 "method": "framework_enable_cpumask_locks", 00:05:45.561 "req_id": 1 00:05:45.561 } 00:05:45.561 Got JSON-RPC error response 00:05:45.561 response: 00:05:45.561 { 00:05:45.561 "code": -32603, 00:05:45.561 "message": "Failed to claim CPU core: 2" 00:05:45.561 } 00:05:45.561 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59714 /var/tmp/spdk.sock 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59714 ']' 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.562 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59728 /var/tmp/spdk2.sock 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59728 ']' 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
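The request/response pair above is the RPC-level view of the same collision: framework_enable_cpumask_locks succeeds on the first target (cores 0-2) and returns -32603 "Failed to claim CPU core: 2" on the second, whose mask 0x1c includes the already-claimed core. The rpc_cmd wrapper in the trace drives scripts/rpc.py, so the two calls amount to:

  # First enable claims cores 0-2 on the default socket (pid 59714).
  scripts/rpc.py framework_enable_cpumask_locks
  # Second enable targets the other instance and fails on shared core 2.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks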
00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.829 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.114 00:05:46.114 real 0m3.010s 00:05:46.114 user 0m1.228s 00:05:46.114 sys 0m0.123s 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.114 09:11:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 ************************************ 00:05:46.114 END TEST locking_overlapped_coremask_via_rpc 00:05:46.114 ************************************ 00:05:46.114 09:11:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.114 09:11:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59714 ]] 00:05:46.114 09:11:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59714 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59714 ']' 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59714 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59714 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59714' 00:05:46.114 killing process with pid 59714 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59714 00:05:46.114 09:11:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59714 00:05:47.483 09:11:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59728 ]] 00:05:47.483 09:11:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59728 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59728 ']' 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59728 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.483 
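check_remaining_locks, traced at cpu_locks.sh@36-38 above, is a plain glob comparison: after the failed claim, exactly the three lock files belonging to the surviving mask 0x7 target must remain. Lifted out as a standalone check:

  # The helper as traced above: the set of live lock files must equal the
  # exact set expected for cores 0-2.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}" >&2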
09:11:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59728 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:47.483 killing process with pid 59728 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59728' 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59728 00:05:47.483 09:11:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59728 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59714 ]] 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59714 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59714 ']' 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59714 00:05:48.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59714) - No such process 00:05:48.855 Process with pid 59714 is not found 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59714 is not found' 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59728 ]] 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59728 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59728 ']' 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59728 00:05:48.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59728) - No such process 00:05:48.855 Process with pid 59728 is not found 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59728 is not found' 00:05:48.855 09:11:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.855 00:05:48.855 real 0m30.544s 00:05:48.855 user 0m52.066s 00:05:48.855 sys 0m4.454s 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.855 09:11:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.855 ************************************ 00:05:48.855 END TEST cpu_locks 00:05:48.855 ************************************ 00:05:48.855 00:05:48.855 real 0m59.360s 00:05:48.855 user 1m48.859s 00:05:48.855 sys 0m7.424s 00:05:48.855 09:11:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.855 09:11:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.855 ************************************ 00:05:48.855 END TEST event 00:05:48.855 ************************************ 00:05:48.855 09:11:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.855 09:11:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.855 09:11:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.855 09:11:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.855 ************************************ 00:05:48.855 START TEST thread 00:05:48.855 ************************************ 00:05:48.855 09:11:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:48.855 * Looking for test storage... 
00:05:48.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:48.855 09:11:40 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.855 09:11:40 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.855 09:11:40 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.113 09:11:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.113 09:11:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.113 09:11:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.113 09:11:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.113 09:11:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.113 09:11:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.113 09:11:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.113 09:11:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.113 09:11:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.113 09:11:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.113 09:11:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.113 09:11:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:49.113 09:11:40 thread -- scripts/common.sh@345 -- # : 1 00:05:49.113 09:11:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.113 09:11:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.113 09:11:40 thread -- scripts/common.sh@365 -- # decimal 1 00:05:49.113 09:11:40 thread -- scripts/common.sh@353 -- # local d=1 00:05:49.113 09:11:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.113 09:11:40 thread -- scripts/common.sh@355 -- # echo 1 00:05:49.113 09:11:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.113 09:11:40 thread -- scripts/common.sh@366 -- # decimal 2 00:05:49.113 09:11:40 thread -- scripts/common.sh@353 -- # local d=2 00:05:49.113 09:11:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.113 09:11:40 thread -- scripts/common.sh@355 -- # echo 2 00:05:49.113 09:11:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.113 09:11:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.113 09:11:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.113 09:11:40 thread -- scripts/common.sh@368 -- # return 0 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.113 --rc genhtml_branch_coverage=1 00:05:49.113 --rc genhtml_function_coverage=1 00:05:49.113 --rc genhtml_legend=1 00:05:49.113 --rc geninfo_all_blocks=1 00:05:49.113 --rc geninfo_unexecuted_blocks=1 00:05:49.113 00:05:49.113 ' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.113 --rc genhtml_branch_coverage=1 00:05:49.113 --rc genhtml_function_coverage=1 00:05:49.113 --rc genhtml_legend=1 00:05:49.113 --rc geninfo_all_blocks=1 00:05:49.113 --rc geninfo_unexecuted_blocks=1 00:05:49.113 00:05:49.113 ' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:49.113 --rc genhtml_branch_coverage=1 00:05:49.113 --rc genhtml_function_coverage=1 00:05:49.113 --rc genhtml_legend=1 00:05:49.113 --rc geninfo_all_blocks=1 00:05:49.113 --rc geninfo_unexecuted_blocks=1 00:05:49.113 00:05:49.113 ' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.113 --rc genhtml_branch_coverage=1 00:05:49.113 --rc genhtml_function_coverage=1 00:05:49.113 --rc genhtml_legend=1 00:05:49.113 --rc geninfo_all_blocks=1 00:05:49.113 --rc geninfo_unexecuted_blocks=1 00:05:49.113 00:05:49.113 ' 00:05:49.113 09:11:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.113 09:11:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.113 ************************************ 00:05:49.113 START TEST thread_poller_perf 00:05:49.113 ************************************ 00:05:49.113 09:11:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.113 [2024-10-08 09:11:40.611135] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:49.113 [2024-10-08 09:11:40.611257] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:05:49.113 [2024-10-08 09:11:40.768315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.370 Running 1000 pollers for 1 seconds with 1 microseconds period. 
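poller_perf's flags map directly onto the announcement above: -b is the number of pollers to register, -l the poller period in microseconds, and -t the run time in seconds (a mapping inferred from the announcements in this log, not from the tool's help text). So the two passes are:

  # Timed pollers: 1000 pollers, 1 us period, 1 second run (this pass).
  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
  # Busy pollers: same, but with a 0 us period (the next pass below).
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1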
00:05:49.370 [2024-10-08 09:11:40.967272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.776 [2024-10-08T09:11:42.459Z] ====================================== 00:05:50.776 [2024-10-08T09:11:42.459Z] busy:2610790198 (cyc) 00:05:50.776 [2024-10-08T09:11:42.459Z] total_run_count: 290000 00:05:50.776 [2024-10-08T09:11:42.459Z] tsc_hz: 2600000000 (cyc) 00:05:50.776 [2024-10-08T09:11:42.459Z] ====================================== 00:05:50.776 [2024-10-08T09:11:42.459Z] poller_cost: 9002 (cyc), 3462 (nsec) 00:05:50.776 00:05:50.776 real 0m1.680s 00:05:50.776 user 0m1.479s 00:05:50.776 sys 0m0.092s 00:05:50.776 09:11:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.776 09:11:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.776 ************************************ 00:05:50.776 END TEST thread_poller_perf 00:05:50.776 ************************************ 00:05:50.776 09:11:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.776 09:11:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:50.776 09:11:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.776 09:11:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.776 ************************************ 00:05:50.776 START TEST thread_poller_perf 00:05:50.776 ************************************ 00:05:50.776 09:11:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.776 [2024-10-08 09:11:42.324205] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:50.776 [2024-10-08 09:11:42.324333] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:05:51.033 [2024-10-08 09:11:42.473213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.033 Running 1000 pollers for 1 seconds with 0 microseconds period. 
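The poller_cost line above is just the two reported counters divided out: 2610790198 busy cycles over 290000 runs gives 9002 cycles per poller invocation, and at the reported tsc_hz of 2.6 GHz that is 3462 ns. The same arithmetic on the busy-poller results below yields 661 cycles and 254 ns.

  # Re-deriving poller_cost from the counters printed above.
  busy=2610790198 runs=290000 tsc_hz=2600000000
  cyc=$(( busy / runs ))                     # 9002 (cyc)
  nsec=$(( cyc * 1000000000 / tsc_hz ))      # 3462 (nsec)
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"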
00:05:51.033 [2024-10-08 09:11:42.657543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.404 [2024-10-08T09:11:44.087Z] ====================================== 00:05:52.404 [2024-10-08T09:11:44.087Z] busy:2603358182 (cyc) 00:05:52.404 [2024-10-08T09:11:44.087Z] total_run_count: 3935000 00:05:52.404 [2024-10-08T09:11:44.087Z] tsc_hz: 2600000000 (cyc) 00:05:52.404 [2024-10-08T09:11:44.087Z] ====================================== 00:05:52.404 [2024-10-08T09:11:44.087Z] poller_cost: 661 (cyc), 254 (nsec) 00:05:52.404 00:05:52.404 real 0m1.634s 00:05:52.404 user 0m1.447s 00:05:52.404 sys 0m0.079s 00:05:52.404 09:11:43 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.404 09:11:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.404 ************************************ 00:05:52.404 END TEST thread_poller_perf 00:05:52.404 ************************************ 00:05:52.404 09:11:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.404 00:05:52.404 real 0m3.525s 00:05:52.404 user 0m3.041s 00:05:52.404 sys 0m0.271s 00:05:52.404 09:11:43 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.404 09:11:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.404 ************************************ 00:05:52.404 END TEST thread 00:05:52.404 ************************************ 00:05:52.404 09:11:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:52.404 09:11:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.404 09:11:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.404 09:11:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.404 09:11:43 -- common/autotest_common.sh@10 -- # set +x 00:05:52.404 ************************************ 00:05:52.404 START TEST app_cmdline 00:05:52.404 ************************************ 00:05:52.404 09:11:43 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.404 * Looking for test storage... 
00:05:52.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:52.404 09:11:44 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.404 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.404 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.662 09:11:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.662 --rc genhtml_branch_coverage=1 00:05:52.662 --rc genhtml_function_coverage=1 00:05:52.662 --rc genhtml_legend=1 00:05:52.662 --rc geninfo_all_blocks=1 00:05:52.662 --rc geninfo_unexecuted_blocks=1 00:05:52.662 00:05:52.662 ' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.662 --rc genhtml_branch_coverage=1 00:05:52.662 --rc genhtml_function_coverage=1 00:05:52.662 --rc genhtml_legend=1 00:05:52.662 --rc geninfo_all_blocks=1 00:05:52.662 --rc geninfo_unexecuted_blocks=1 00:05:52.662 
00:05:52.662 ' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.662 --rc genhtml_branch_coverage=1 00:05:52.662 --rc genhtml_function_coverage=1 00:05:52.662 --rc genhtml_legend=1 00:05:52.662 --rc geninfo_all_blocks=1 00:05:52.662 --rc geninfo_unexecuted_blocks=1 00:05:52.662 00:05:52.662 ' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.662 --rc genhtml_branch_coverage=1 00:05:52.662 --rc genhtml_function_coverage=1 00:05:52.662 --rc genhtml_legend=1 00:05:52.662 --rc geninfo_all_blocks=1 00:05:52.662 --rc geninfo_unexecuted_blocks=1 00:05:52.662 00:05:52.662 ' 00:05:52.662 09:11:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:52.662 09:11:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60013 00:05:52.662 09:11:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60013 00:05:52.662 09:11:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60013 ']' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.662 09:11:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.662 [2024-10-08 09:11:44.207673] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:52.662 [2024-10-08 09:11:44.207801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60013 ] 00:05:52.920 [2024-10-08 09:11:44.356615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.920 [2024-10-08 09:11:44.540472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.484 09:11:45 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.484 09:11:45 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:53.484 09:11:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:53.741 { 00:05:53.741 "version": "SPDK v25.01-pre git sha1 91fca59bc", 00:05:53.741 "fields": { 00:05:53.741 "major": 25, 00:05:53.741 "minor": 1, 00:05:53.741 "patch": 0, 00:05:53.741 "suffix": "-pre", 00:05:53.741 "commit": "91fca59bc" 00:05:53.741 } 00:05:53.741 } 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.741 09:11:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:53.741 09:11:45 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.998 request: 00:05:53.998 { 00:05:53.998 "method": "env_dpdk_get_mem_stats", 00:05:53.998 "req_id": 1 00:05:53.998 } 00:05:53.998 Got JSON-RPC error response 00:05:53.998 response: 00:05:53.998 { 00:05:53.998 "code": -32601, 00:05:53.998 "message": "Method not found" 00:05:53.998 } 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.998 09:11:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60013 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60013 ']' 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60013 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60013 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.998 killing process with pid 60013 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60013' 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@969 -- # kill 60013 00:05:53.998 09:11:45 app_cmdline -- common/autotest_common.sh@974 -- # wait 60013 00:05:55.894 00:05:55.894 real 0m3.203s 00:05:55.894 user 0m3.464s 00:05:55.894 sys 0m0.448s 00:05:55.894 09:11:47 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.894 09:11:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.894 ************************************ 00:05:55.894 END TEST app_cmdline 00:05:55.894 ************************************ 00:05:55.894 09:11:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.894 09:11:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.894 09:11:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.894 09:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.894 ************************************ 00:05:55.894 START TEST version 00:05:55.894 ************************************ 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:55.894 * Looking for test storage... 
00:05:55.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.894 09:11:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.894 09:11:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.894 09:11:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.894 09:11:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.894 09:11:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.894 09:11:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.894 09:11:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.894 09:11:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.894 09:11:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.894 09:11:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.894 09:11:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.894 09:11:47 version -- scripts/common.sh@344 -- # case "$op" in 00:05:55.894 09:11:47 version -- scripts/common.sh@345 -- # : 1 00:05:55.894 09:11:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.894 09:11:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.894 09:11:47 version -- scripts/common.sh@365 -- # decimal 1 00:05:55.894 09:11:47 version -- scripts/common.sh@353 -- # local d=1 00:05:55.894 09:11:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.894 09:11:47 version -- scripts/common.sh@355 -- # echo 1 00:05:55.894 09:11:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.894 09:11:47 version -- scripts/common.sh@366 -- # decimal 2 00:05:55.894 09:11:47 version -- scripts/common.sh@353 -- # local d=2 00:05:55.894 09:11:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.894 09:11:47 version -- scripts/common.sh@355 -- # echo 2 00:05:55.894 09:11:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.894 09:11:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.894 09:11:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.894 09:11:47 version -- scripts/common.sh@368 -- # return 0 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.894 --rc genhtml_branch_coverage=1 00:05:55.894 --rc genhtml_function_coverage=1 00:05:55.894 --rc genhtml_legend=1 00:05:55.894 --rc geninfo_all_blocks=1 00:05:55.894 --rc geninfo_unexecuted_blocks=1 00:05:55.894 00:05:55.894 ' 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.894 --rc genhtml_branch_coverage=1 00:05:55.894 --rc genhtml_function_coverage=1 00:05:55.894 --rc genhtml_legend=1 00:05:55.894 --rc geninfo_all_blocks=1 00:05:55.894 --rc geninfo_unexecuted_blocks=1 00:05:55.894 00:05:55.894 ' 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.894 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:55.894 --rc genhtml_branch_coverage=1 00:05:55.894 --rc genhtml_function_coverage=1 00:05:55.894 --rc genhtml_legend=1 00:05:55.894 --rc geninfo_all_blocks=1 00:05:55.894 --rc geninfo_unexecuted_blocks=1 00:05:55.894 00:05:55.894 ' 00:05:55.894 09:11:47 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.894 --rc genhtml_branch_coverage=1 00:05:55.894 --rc genhtml_function_coverage=1 00:05:55.894 --rc genhtml_legend=1 00:05:55.894 --rc geninfo_all_blocks=1 00:05:55.894 --rc geninfo_unexecuted_blocks=1 00:05:55.894 00:05:55.894 ' 00:05:55.894 09:11:47 version -- app/version.sh@17 -- # get_header_version major 00:05:55.895 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.895 09:11:47 version -- app/version.sh@17 -- # major=25 00:05:55.895 09:11:47 version -- app/version.sh@18 -- # get_header_version minor 00:05:55.895 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.895 09:11:47 version -- app/version.sh@18 -- # minor=1 00:05:55.895 09:11:47 version -- app/version.sh@19 -- # get_header_version patch 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.895 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.895 09:11:47 version -- app/version.sh@19 -- # patch=0 00:05:55.895 09:11:47 version -- app/version.sh@20 -- # get_header_version suffix 00:05:55.895 09:11:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # tr -d '"' 00:05:55.895 09:11:47 version -- app/version.sh@14 -- # cut -f2 00:05:55.895 09:11:47 version -- app/version.sh@20 -- # suffix=-pre 00:05:55.895 09:11:47 version -- app/version.sh@22 -- # version=25.1 00:05:55.895 09:11:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:55.895 09:11:47 version -- app/version.sh@28 -- # version=25.1rc0 00:05:55.895 09:11:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:55.895 09:11:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:55.895 09:11:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:55.895 09:11:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:55.895 00:05:55.895 real 0m0.176s 00:05:55.895 user 0m0.113s 00:05:55.895 sys 0m0.092s 00:05:55.895 09:11:47 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.895 09:11:47 version -- common/autotest_common.sh@10 -- # set +x 00:05:55.895 ************************************ 00:05:55.895 END TEST version 00:05:55.895 ************************************ 00:05:55.895 09:11:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:55.895 09:11:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:55.895 09:11:47 -- spdk/autotest.sh@194 -- # uname -s 00:05:55.895 09:11:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:55.895 09:11:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.895 09:11:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:55.895 09:11:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:55.895 09:11:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:55.895 09:11:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:55.895 09:11:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.895 09:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.895 ************************************ 00:05:55.895 START TEST blockdev_nvme 00:05:55.895 ************************************ 00:05:55.895 09:11:47 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:55.895 * Looking for test storage... 00:05:55.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:55.895 09:11:47 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.895 09:11:47 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.895 09:11:47 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.153 09:11:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.153 --rc genhtml_branch_coverage=1 00:05:56.153 --rc genhtml_function_coverage=1 00:05:56.153 --rc genhtml_legend=1 00:05:56.153 --rc geninfo_all_blocks=1 00:05:56.153 --rc geninfo_unexecuted_blocks=1 00:05:56.153 00:05:56.153 ' 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.153 --rc genhtml_branch_coverage=1 00:05:56.153 --rc genhtml_function_coverage=1 00:05:56.153 --rc genhtml_legend=1 00:05:56.153 --rc geninfo_all_blocks=1 00:05:56.153 --rc geninfo_unexecuted_blocks=1 00:05:56.153 00:05:56.153 ' 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.153 --rc genhtml_branch_coverage=1 00:05:56.153 --rc genhtml_function_coverage=1 00:05:56.153 --rc genhtml_legend=1 00:05:56.153 --rc geninfo_all_blocks=1 00:05:56.153 --rc geninfo_unexecuted_blocks=1 00:05:56.153 00:05:56.153 ' 00:05:56.153 09:11:47 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.153 --rc genhtml_branch_coverage=1 00:05:56.153 --rc genhtml_function_coverage=1 00:05:56.153 --rc genhtml_legend=1 00:05:56.153 --rc geninfo_all_blocks=1 00:05:56.153 --rc geninfo_unexecuted_blocks=1 00:05:56.153 00:05:56.153 ' 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:56.153 09:11:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:56.153 09:11:47 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60191 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60191 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60191 ']' 00:05:56.154 09:11:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.154 09:11:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.154 [2024-10-08 09:11:47.694117] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:05:56.154 [2024-10-08 09:11:47.694250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:05:56.412 [2024-10-08 09:11:47.845647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.413 [2024-10-08 09:11:48.032159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.979 09:11:48 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.979 09:11:48 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:05:56.979 09:11:48 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:56.979 09:11:48 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:56.979 09:11:48 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:56.979 09:11:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:56.979 09:11:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.236 09:11:48 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:57.236 09:11:48 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.236 09:11:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:48 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:57.494 09:11:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.494 09:11:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:49 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:57.494 09:11:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 09:11:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:57.494 09:11:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.494 09:11:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:57.494 09:11:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:57.495 09:11:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3d93af49-9ac6-46b2-a681-a4fdb2d64c9d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3d93af49-9ac6-46b2-a681-a4fdb2d64c9d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3549e5c1-2c00-4c0f-bece-d93539e2968f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3549e5c1-2c00-4c0f-bece-d93539e2968f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9f02c41d-8025-44a3-aafc-90f9ffb3e8fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f02c41d-8025-44a3-aafc-90f9ffb3e8fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e3a3f639-81a8-4ffc-901d-1aea7ea92056"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3a3f639-81a8-4ffc-901d-1aea7ea92056",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0948fbc5-1c01-438a-bdb7-83c97b4c1506"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "0948fbc5-1c01-438a-bdb7-83c97b4c1506",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1aeb7ca8-386b-4433-81b8-3842dd5edc3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1aeb7ca8-386b-4433-81b8-3842dd5edc3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:57.495 09:11:49 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:57.495 09:11:49 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:57.495 09:11:49 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:57.495 09:11:49 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60191 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60191 ']' 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60191 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:05:57.495 09:11:49 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60191 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.495 killing process with pid 60191 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60191' 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60191 00:05:57.495 09:11:49 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60191 00:05:59.390 09:11:50 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:59.390 09:11:50 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.390 09:11:50 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:05:59.390 09:11:50 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.390 09:11:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.390 ************************************ 00:05:59.390 START TEST bdev_hello_world 00:05:59.390 ************************************ 00:05:59.390 09:11:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:59.390 [2024-10-08 09:11:50.720293] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:05:59.390 [2024-10-08 09:11:50.720440] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:05:59.390 [2024-10-08 09:11:50.869654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.390 [2024-10-08 09:11:51.025144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.955 [2024-10-08 09:11:51.521796] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:59.955 [2024-10-08 09:11:51.521852] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:59.955 [2024-10-08 09:11:51.521873] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:59.955 [2024-10-08 09:11:51.523999] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:59.955 [2024-10-08 09:11:51.524425] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:59.955 [2024-10-08 09:11:51.524451] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:59.955 [2024-10-08 09:11:51.524617] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
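The bdev.json handed to hello_bdev above carries the same four bdev_nvme_attach_controller calls that load_subsystem_config issued earlier in this log. A hand-assembled equivalent, reusing only the controller names and PCIe addresses shown above; the /tmp path and the outer "subsystems" wrapper are assumptions of this sketch, not taken from the log:

cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
  { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
  { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
  { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1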
00:05:59.955 00:05:59.955 [2024-10-08 09:11:51.524639] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:00.540 00:06:00.540 real 0m1.532s 00:06:00.540 user 0m1.261s 00:06:00.540 sys 0m0.163s 00:06:00.540 09:11:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.540 09:11:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:00.540 ************************************ 00:06:00.540 END TEST bdev_hello_world 00:06:00.540 ************************************ 00:06:00.540 09:11:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:00.541 09:11:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:00.541 09:11:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.541 09:11:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 ************************************ 00:06:00.798 START TEST bdev_bounds 00:06:00.798 ************************************ 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60310 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.798 Process bdevio pid: 60310 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60310' 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60310 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60310 ']' 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.798 09:11:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:00.798 [2024-10-08 09:11:52.292480] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
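Condensed from the xtrace around it, the bdevio run below is driven in two steps. Both commands appear verbatim in this log; the backgrounding and the reading of -w (start idle and wait until tests are triggered over the RPC socket) are this sketch's interpretation of the flags:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &   # waits on /var/tmp/spdk.sock
bdevio_pid=$!
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # fires the suites, prints the CUnit report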
00:06:00.798 [2024-10-08 09:11:52.292607] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:06:00.798 [2024-10-08 09:11:52.438778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.056 [2024-10-08 09:11:52.596287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.056 [2024-10-08 09:11:52.596981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.056 [2024-10-08 09:11:52.596994] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.621 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.621 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:01.621 09:11:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:01.621 I/O targets: 00:06:01.621 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:01.621 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:01.621 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.621 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.621 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:01.621 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:01.621 00:06:01.621 00:06:01.621 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.621 http://cunit.sourceforge.net/ 00:06:01.621 00:06:01.621 00:06:01.621 Suite: bdevio tests on: Nvme3n1 00:06:01.621 Test: blockdev write read block ...passed 00:06:01.621 Test: blockdev write zeroes read block ...passed 00:06:01.621 Test: blockdev write zeroes read no split ...passed 00:06:01.621 Test: blockdev write zeroes read split ...passed 00:06:01.879 Test: blockdev write zeroes read split partial ...passed 00:06:01.879 Test: blockdev reset ...[2024-10-08 09:11:53.316149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:01.879 [2024-10-08 09:11:53.320416] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:01.879 passed 00:06:01.879 Test: blockdev write read 8 blocks ...passed 00:06:01.879 Test: blockdev write read size > 128k ...passed 00:06:01.879 Test: blockdev write read invalid size ...passed 00:06:01.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.879 Test: blockdev write read max offset ...passed 00:06:01.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.879 Test: blockdev writev readv 8 blocks ...passed 00:06:01.879 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.879 Test: blockdev writev readv block ...passed 00:06:01.879 Test: blockdev writev readv size > 128k ...passed 00:06:01.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.879 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.326730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1e0a000 len:0x1000 00:06:01.879 [2024-10-08 09:11:53.326789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev nvme passthru rw ...passed 00:06:01.879 Test: blockdev nvme passthru vendor specific ...passed 00:06:01.879 Test: blockdev nvme admin passthru ...[2024-10-08 09:11:53.327326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.879 [2024-10-08 09:11:53.327352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev copy ...passed 00:06:01.879 Suite: bdevio tests on: Nvme2n3 00:06:01.879 Test: blockdev write read block ...passed 00:06:01.879 Test: blockdev write zeroes read block ...passed 00:06:01.879 Test: blockdev write zeroes read no split ...passed 00:06:01.879 Test: blockdev write zeroes read split ...passed 00:06:01.879 Test: blockdev write zeroes read split partial ...passed 00:06:01.879 Test: blockdev reset ...[2024-10-08 09:11:53.386528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:01.879 [2024-10-08 09:11:53.390325] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:01.879 passed 00:06:01.879 Test: blockdev write read 8 blocks ...passed 00:06:01.879 Test: blockdev write read size > 128k ...passed 00:06:01.879 Test: blockdev write read invalid size ...passed 00:06:01.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.879 Test: blockdev write read max offset ...passed 00:06:01.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.879 Test: blockdev writev readv 8 blocks ...passed 00:06:01.879 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.879 Test: blockdev writev readv block ...passed 00:06:01.879 Test: blockdev writev readv size > 128k ...passed 00:06:01.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.879 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.395795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a7404000 len:0x1000 00:06:01.879 passed 00:06:01.879 Test: blockdev nvme passthru rw ...[2024-10-08 09:11:53.395848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:11:53.396288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.879 passed 00:06:01.879 Test: blockdev nvme admin passthru ...[2024-10-08 09:11:53.396315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev copy ...passed 00:06:01.879 Suite: bdevio tests on: Nvme2n2 00:06:01.879 Test: blockdev write read block ...passed 00:06:01.879 Test: blockdev write zeroes read block ...passed 00:06:01.879 Test: blockdev write zeroes read no split ...passed 00:06:01.879 Test: blockdev write zeroes read split ...passed 00:06:01.879 Test: blockdev write zeroes read split partial ...passed 00:06:01.879 Test: blockdev reset ...[2024-10-08 09:11:53.441530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:01.879 [2024-10-08 09:11:53.444771] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:01.879 passed 00:06:01.879 Test: blockdev write read 8 blocks ...passed 00:06:01.879 Test: blockdev write read size > 128k ...passed 00:06:01.879 Test: blockdev write read invalid size ...passed 00:06:01.879 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.879 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.879 Test: blockdev write read max offset ...passed 00:06:01.879 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.879 Test: blockdev writev readv 8 blocks ...passed 00:06:01.879 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.879 Test: blockdev writev readv block ...passed 00:06:01.879 Test: blockdev writev readv size > 128k ...passed 00:06:01.879 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.879 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.452296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf03a000 len:0x1000 00:06:01.879 [2024-10-08 09:11:53.452356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev nvme passthru rw ...passed 00:06:01.879 Test: blockdev nvme passthru vendor specific ...passed 00:06:01.879 Test: blockdev nvme admin passthru ...[2024-10-08 09:11:53.452985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.879 [2024-10-08 09:11:53.453013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.879 passed 00:06:01.879 Test: blockdev copy ...passed 00:06:01.879 Suite: bdevio tests on: Nvme2n1 00:06:01.879 Test: blockdev write read block ...passed 00:06:01.879 Test: blockdev write zeroes read block ...passed 00:06:01.879 Test: blockdev write zeroes read no split ...passed 00:06:01.879 Test: blockdev write zeroes read split ...passed 00:06:01.879 Test: blockdev write zeroes read split partial ...passed 00:06:01.879 Test: blockdev reset ...[2024-10-08 09:11:53.511512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:01.879 [2024-10-08 09:11:53.514520] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:01.879 passed 00:06:01.879 Test: blockdev write read 8 blocks ...passed 00:06:01.879 Test: blockdev write read size > 128k ...passed 00:06:01.879 Test: blockdev write read invalid size ...passed 00:06:01.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:01.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:01.880 Test: blockdev write read max offset ...passed 00:06:01.880 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:01.880 Test: blockdev writev readv 8 blocks ...passed 00:06:01.880 Test: blockdev writev readv 30 x 1block ...passed 00:06:01.880 Test: blockdev writev readv block ...passed 00:06:01.880 Test: blockdev writev readv size > 128k ...passed 00:06:01.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:01.880 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.520015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf034000 len:0x1000 00:06:01.880 [2024-10-08 09:11:53.520062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:01.880 passed 00:06:01.880 Test: blockdev nvme passthru rw ...passed 00:06:01.880 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:11:53.520556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:01.880 [2024-10-08 09:11:53.520580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:01.880 passed 00:06:01.880 Test: blockdev nvme admin passthru ...passed 00:06:01.880 Test: blockdev copy ...passed 00:06:01.880 Suite: bdevio tests on: Nvme1n1 00:06:01.880 Test: blockdev write read block ...passed 00:06:01.880 Test: blockdev write zeroes read block ...passed 00:06:01.880 Test: blockdev write zeroes read no split ...passed 00:06:01.880 Test: blockdev write zeroes read split ...passed 00:06:02.137 Test: blockdev write zeroes read split partial ...passed 00:06:02.137 Test: blockdev reset ...[2024-10-08 09:11:53.568279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:02.137 [2024-10-08 09:11:53.571069] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:02.137 passed 00:06:02.137 Test: blockdev write read 8 blocks ...passed 00:06:02.137 Test: blockdev write read size > 128k ...passed 00:06:02.137 Test: blockdev write read invalid size ...passed 00:06:02.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.137 Test: blockdev write read max offset ...passed 00:06:02.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.137 Test: blockdev writev readv 8 blocks ...passed 00:06:02.137 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.137 Test: blockdev writev readv block ...passed 00:06:02.137 Test: blockdev writev readv size > 128k ...passed 00:06:02.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.137 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.576900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf030000 len:0x1000 00:06:02.137 passed 00:06:02.137 Test: blockdev nvme passthru rw ...[2024-10-08 09:11:53.576953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:02.137 passed 00:06:02.137 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:11:53.577508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:02.138 [2024-10-08 09:11:53.577531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:02.138 passed 00:06:02.138 Test: blockdev nvme admin passthru ...passed 00:06:02.138 Test: blockdev copy ...passed 00:06:02.138 Suite: bdevio tests on: Nvme0n1 00:06:02.138 Test: blockdev write read block ...passed 00:06:02.138 Test: blockdev write zeroes read block ...passed 00:06:02.138 Test: blockdev write zeroes read no split ...passed 00:06:02.138 Test: blockdev write zeroes read split ...passed 00:06:02.138 Test: blockdev write zeroes read split partial ...passed 00:06:02.138 Test: blockdev reset ...[2024-10-08 09:11:53.623081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:02.138 [2024-10-08 09:11:53.625864] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:02.138 passed 00:06:02.138 Test: blockdev write read 8 blocks ...passed 00:06:02.138 Test: blockdev write read size > 128k ...passed 00:06:02.138 Test: blockdev write read invalid size ...passed 00:06:02.138 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:02.138 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:02.138 Test: blockdev write read max offset ...passed 00:06:02.138 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:02.138 Test: blockdev writev readv 8 blocks ...passed 00:06:02.138 Test: blockdev writev readv 30 x 1block ...passed 00:06:02.138 Test: blockdev writev readv block ...passed 00:06:02.138 Test: blockdev writev readv size > 128k ...passed 00:06:02.138 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:02.138 Test: blockdev comparev and writev ...[2024-10-08 09:11:53.630578] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:02.138 separate metadata which is not supported yet. 
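The comparev skip above matches the bdev_get_bdevs dump earlier in this log: Nvme0n1 is the only namespace reporting separate, non-interleaved metadata. One way to confirm that against a target loaded with the same bdev.json (the jq projection is illustrative):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {md_size, md_interleave, dif_type}'
# per the dump above: {"md_size": 64, "md_interleave": false, "dif_type": 0}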
00:06:02.138 passed 00:06:02.138 Test: blockdev nvme passthru rw ...passed 00:06:02.138 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:11:53.630922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:02.138 passed 00:06:02.138 Test: blockdev nvme admin passthru ...[2024-10-08 09:11:53.630963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:02.138 passed 00:06:02.138 Test: blockdev copy ...passed 00:06:02.138 00:06:02.138 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.138 suites 6 6 n/a 0 0 00:06:02.138 tests 138 138 138 0 0 00:06:02.138 asserts 893 893 893 0 n/a 00:06:02.138 00:06:02.138 Elapsed time = 1.004 seconds 00:06:02.138 0 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60310 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60310 ']' 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60310 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60310 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.138 killing process with pid 60310 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60310' 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60310 00:06:02.138 09:11:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60310 00:06:03.069 09:11:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:03.069 00:06:03.069 real 0m2.180s 00:06:03.069 user 0m5.440s 00:06:03.069 sys 0m0.288s 00:06:03.069 09:11:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.069 09:11:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:03.069 ************************************ 00:06:03.069 END TEST bdev_bounds 00:06:03.069 ************************************ 00:06:03.069 09:11:54 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:03.069 09:11:54 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:03.069 09:11:54 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.069 09:11:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.069 ************************************ 00:06:03.069 START TEST bdev_nbd 00:06:03.069 ************************************ 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:03.069 09:11:54 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60365 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60365 /var/tmp/spdk-nbd.sock 00:06:03.069 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60365 ']' 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:03.070 09:11:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:03.070 [2024-10-08 09:11:54.524830] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
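The records that follow repeat one cycle per bdev; pulled out of the xtrace, the cycle is as below (every command appears in this log, with the 20-iteration wait mirroring the waitfornbd helper):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
dev=$($rpc nbd_start_disk Nvme0n1)   # the RPC returns the kernel node it picked, e.g. /dev/nbd0
for i in $(seq 1 20); do grep -q -w "$(basename "$dev")" /proc/partitions && break; sleep 0.1; done
dd if="$dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
$rpc nbd_stop_disk "$dev"
$rpc nbd_get_disks   # prints [] once every disk has been stopped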
00:06:03.070 [2024-10-08 09:11:54.524956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.070 [2024-10-08 09:11:54.675377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.328 [2024-10-08 09:11:54.867875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:03.893 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.151 1+0 records in 
00:06:04.151 1+0 records out 00:06:04.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400692 s, 10.2 MB/s 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.151 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.408 1+0 records in 00:06:04.408 1+0 records out 00:06:04.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425934 s, 9.6 MB/s 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.408 09:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.665 1+0 records in 00:06:04.665 1+0 records out 00:06:04.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297433 s, 13.8 MB/s 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.665 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.922 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.922 1+0 records in 00:06:04.922 1+0 records out 00:06:04.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486565 s, 8.4 MB/s 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.923 09:11:56 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:04.923 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.179 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.180 1+0 records in 00:06:05.180 1+0 records out 00:06:05.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488211 s, 8.4 MB/s 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.180 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.463 1+0 records in 00:06:05.463 1+0 records out 00:06:05.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486379 s, 8.4 MB/s 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.463 09:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.463 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd0", 00:06:05.463 "bdev_name": "Nvme0n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd1", 00:06:05.463 "bdev_name": "Nvme1n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd2", 00:06:05.463 "bdev_name": "Nvme2n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd3", 00:06:05.463 "bdev_name": "Nvme2n2" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd4", 00:06:05.463 "bdev_name": "Nvme2n3" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd5", 00:06:05.463 "bdev_name": "Nvme3n1" 00:06:05.463 } 00:06:05.463 ]' 00:06:05.463 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:05.463 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd0", 00:06:05.463 "bdev_name": "Nvme0n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd1", 00:06:05.463 "bdev_name": "Nvme1n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd2", 00:06:05.463 "bdev_name": "Nvme2n1" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd3", 00:06:05.463 "bdev_name": "Nvme2n2" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd4", 00:06:05.463 "bdev_name": "Nvme2n3" 00:06:05.463 }, 00:06:05.463 { 00:06:05.463 "nbd_device": "/dev/nbd5", 00:06:05.463 "bdev_name": "Nvme3n1" 00:06:05.463 } 00:06:05.463 ]' 00:06:05.463 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.720 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.721 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.978 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.235 09:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.492 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.750 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.008 09:11:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.008 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.265 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:07.265 /dev/nbd0 00:06:07.522 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.523 
09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.523 1+0 records in 00:06:07.523 1+0 records out 00:06:07.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056071 s, 7.3 MB/s 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.523 09:11:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:07.523 /dev/nbd1 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.523 1+0 records in 00:06:07.523 1+0 records out 00:06:07.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465255 s, 8.8 MB/s 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.523 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:07.781 /dev/nbd10 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.781 1+0 records in 00:06:07.781 1+0 records out 00:06:07.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403971 s, 10.1 MB/s 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:07.781 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:08.039 /dev/nbd11 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.039 1+0 records in 00:06:08.039 1+0 records out 00:06:08.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536094 s, 7.6 MB/s 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.039 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:08.297 /dev/nbd12 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.297 1+0 records in 00:06:08.297 1+0 records out 00:06:08.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043412 s, 9.4 MB/s 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.297 09:11:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:08.554 /dev/nbd13 00:06:08.554 09:12:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:08.554 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.555 1+0 records in 00:06:08.555 1+0 records out 00:06:08.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474603 s, 8.6 MB/s 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.555 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd0", 00:06:08.812 "bdev_name": "Nvme0n1" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd1", 00:06:08.812 "bdev_name": "Nvme1n1" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd10", 00:06:08.812 "bdev_name": "Nvme2n1" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd11", 00:06:08.812 "bdev_name": "Nvme2n2" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd12", 00:06:08.812 "bdev_name": "Nvme2n3" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd13", 00:06:08.812 "bdev_name": "Nvme3n1" 00:06:08.812 } 00:06:08.812 ]' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd0", 00:06:08.812 "bdev_name": "Nvme0n1" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd1", 00:06:08.812 "bdev_name": "Nvme1n1" 00:06:08.812 }, 00:06:08.812 { 
00:06:08.812 "nbd_device": "/dev/nbd10", 00:06:08.812 "bdev_name": "Nvme2n1" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd11", 00:06:08.812 "bdev_name": "Nvme2n2" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd12", 00:06:08.812 "bdev_name": "Nvme2n3" 00:06:08.812 }, 00:06:08.812 { 00:06:08.812 "nbd_device": "/dev/nbd13", 00:06:08.812 "bdev_name": "Nvme3n1" 00:06:08.812 } 00:06:08.812 ]' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.812 /dev/nbd1 00:06:08.812 /dev/nbd10 00:06:08.812 /dev/nbd11 00:06:08.812 /dev/nbd12 00:06:08.812 /dev/nbd13' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.812 /dev/nbd1 00:06:08.812 /dev/nbd10 00:06:08.812 /dev/nbd11 00:06:08.812 /dev/nbd12 00:06:08.812 /dev/nbd13' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:08.812 256+0 records in 00:06:08.812 256+0 records out 00:06:08.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0074682 s, 140 MB/s 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.812 256+0 records in 00:06:08.812 256+0 records out 00:06:08.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662527 s, 15.8 MB/s 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.812 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.069 256+0 records in 00:06:09.069 256+0 records out 00:06:09.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0721851 s, 14.5 MB/s 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:09.069 256+0 records in 00:06:09.069 256+0 records out 00:06:09.069 1048576 bytes (1.0 MB, 
1.0 MiB) copied, 0.0657069 s, 16.0 MB/s 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:09.069 256+0 records in 00:06:09.069 256+0 records out 00:06:09.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0730491 s, 14.4 MB/s 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.069 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:09.331 256+0 records in 00:06:09.331 256+0 records out 00:06:09.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660738 s, 15.9 MB/s 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:09.331 256+0 records in 00:06:09.331 256+0 records out 00:06:09.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660633 s, 15.9 MB/s 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:09.331 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.332 09:12:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.605 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.863 09:12:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.863 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.120 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.377 09:12:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.634 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:10.635 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:10.892 malloc_lvol_verify 00:06:10.892 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:11.149 5357d326-9edf-4f19-88a6-b8cfb739f0e7 00:06:11.149 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:11.407 aca378db-eb7e-4ecf-8836-797e7c3b6bbd 00:06:11.407 09:12:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:11.407 /dev/nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:11.664 mke2fs 1.47.0 (5-Feb-2023) 00:06:11.664 Discarding device blocks: 0/4096 done 00:06:11.664 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:11.664 00:06:11.664 Allocating group tables: 0/1 done 00:06:11.664 Writing inode tables: 0/1 done 00:06:11.664 Creating journal (1024 blocks): done 00:06:11.664 Writing superblocks and filesystem accounting information: 0/1 done 00:06:11.664 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:11.664 09:12:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60365 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60365 ']' 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60365 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.664 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60365 00:06:11.922 killing process with pid 60365 00:06:11.922 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.922 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.922 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60365' 00:06:11.922 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60365 00:06:11.922 09:12:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60365 00:06:12.856 09:12:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:12.856 00:06:12.856 real 0m9.800s 00:06:12.856 user 0m13.994s 00:06:12.856 sys 0m2.988s 00:06:12.856 09:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.856 ************************************ 00:06:12.856 END TEST bdev_nbd 00:06:12.857 ************************************ 00:06:12.857 09:12:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:12.857 skipping fio tests on NVMe due to multi-ns failures. 00:06:12.857 09:12:04 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:12.857 09:12:04 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:12.857 09:12:04 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
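The bdev_nbd test that just finished exports each bdev as a kernel /dev/nbdX node over the spdk-nbd RPC socket, pushes random data through it, and byte-compares the read-back. A condensed standalone sketch of one such cycle, using the same RPC commands and socket as the run above (the /tmp scratch path and the retry sleep interval are assumptions; the harness's loop bound of 20 is visible in the trace but its sleep is not):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Export the bdev, then wait (up to 20 tries, as in the trace) for the kernel node to appear.
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1    # assumed interval, not shown in this log
done

# Push 1 MiB of random data through the node and byte-compare the read-back.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

# Detach and confirm no exports remain.
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'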
00:06:12.857 09:12:04 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:12.857 09:12:04 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:12.857 09:12:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:12.857 09:12:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.857 09:12:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.857 ************************************ 00:06:12.857 START TEST bdev_verify 00:06:12.857 ************************************ 00:06:12.857 09:12:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:12.857 [2024-10-08 09:12:04.356253] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:12.857 [2024-10-08 09:12:04.356400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60738 ] 00:06:12.857 [2024-10-08 09:12:04.505538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.115 [2024-10-08 09:12:04.661358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.115 [2024-10-08 09:12:04.661386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.678 Running I/O for 5 seconds... 00:06:15.983 25088.00 IOPS, 98.00 MiB/s [2024-10-08T09:12:08.598Z] 25216.00 IOPS, 98.50 MiB/s [2024-10-08T09:12:09.528Z] 25365.33 IOPS, 99.08 MiB/s [2024-10-08T09:12:10.460Z] 25344.00 IOPS, 99.00 MiB/s [2024-10-08T09:12:10.460Z] 25024.00 IOPS, 97.75 MiB/s 00:06:18.777 Latency(us) 00:06:18.777 [2024-10-08T09:12:10.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0xbd0bd 00:06:18.777 Nvme0n1 : 5.06 2024.87 7.91 0.00 0.00 63052.03 13006.38 81062.99 00:06:18.777 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:18.777 Nvme0n1 : 5.05 2103.12 8.22 0.00 0.00 60334.57 12401.43 55251.89 00:06:18.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0xa0000 00:06:18.777 Nvme1n1 : 5.06 2023.66 7.90 0.00 0.00 62945.20 14821.22 72593.72 00:06:18.777 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0xa0000 length 0xa0000 00:06:18.777 Nvme1n1 : 5.06 2110.85 8.25 0.00 0.00 59984.59 4411.08 56865.08 00:06:18.777 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0x80000 00:06:18.777 Nvme2n1 : 5.06 2023.10 7.90 0.00 0.00 62714.36 13510.50 60494.77 00:06:18.777 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x80000 length 0x80000 00:06:18.777 Nvme2n1 : 5.07 2119.58 8.28 0.00 0.00 59673.54 5671.38 58881.58 00:06:18.777 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0x80000 00:06:18.777 Nvme2n2 : 5.06 2022.57 7.90 0.00 0.00 62562.91 12653.49 55655.19 00:06:18.777 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x80000 length 0x80000 00:06:18.777 Nvme2n2 : 5.05 2105.51 8.22 0.00 0.00 60639.04 10637.00 72997.02 00:06:18.777 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0x80000 00:06:18.777 Nvme2n3 : 5.09 2035.02 7.95 0.00 0.00 62133.19 7763.50 57268.38 00:06:18.777 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x80000 length 0x80000 00:06:18.777 Nvme2n3 : 5.05 2104.93 8.22 0.00 0.00 60531.02 12703.90 65737.65 00:06:18.777 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x0 length 0x20000 00:06:18.777 Nvme3n1 : 5.10 2033.80 7.94 0.00 0.00 62056.26 9779.99 58478.28 00:06:18.777 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:18.777 Verification LBA range: start 0x20000 length 0x20000 00:06:18.777 Nvme3n1 : 5.05 2104.36 8.22 0.00 0.00 60437.81 12905.55 56058.49 00:06:18.777 [2024-10-08T09:12:10.460Z] =================================================================================================================== 00:06:18.777 [2024-10-08T09:12:10.460Z] Total : 24811.36 96.92 0.00 0.00 61399.71 4411.08 81062.99 00:06:20.675 00:06:20.675 real 0m7.942s 00:06:20.675 user 0m14.797s 00:06:20.675 sys 0m0.230s 00:06:20.675 09:12:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.675 09:12:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 ************************************ 00:06:20.675 END TEST bdev_verify 00:06:20.675 ************************************ 00:06:20.675 09:12:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:20.675 09:12:12 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:20.675 09:12:12 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.675 09:12:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.675 ************************************ 00:06:20.675 START TEST bdev_verify_big_io 00:06:20.675 ************************************ 00:06:20.675 09:12:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:20.675 [2024-10-08 09:12:12.345567] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
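Both verification passes drive the same bdevperf example binary; only the I/O size differs (-o 4096 for the bdev_verify pass above, -o 65536 for the big-I/O pass now starting). A minimal standalone invocation, assuming the bdev.json generated earlier in the run is still present; -C is carried over unchanged from the harness invocation:

# -q 128: queue depth; -o 65536: I/O size in bytes; -w verify: read back and check every write
# -t 5: run for five seconds; -m 0x3: core mask, matching the two reactors started above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3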
00:06:20.675 [2024-10-08 09:12:12.345693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60836 ] 00:06:20.932 [2024-10-08 09:12:12.491797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.189 [2024-10-08 09:12:12.678624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.189 [2024-10-08 09:12:12.678643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.753 Running I/O for 5 seconds... 00:06:25.237 353.00 IOPS, 22.06 MiB/s [2024-10-08T09:12:17.491Z] 1432.50 IOPS, 89.53 MiB/s [2024-10-08T09:12:18.867Z] 1356.67 IOPS, 84.79 MiB/s [2024-10-08T09:12:19.433Z] 1352.25 IOPS, 84.52 MiB/s [2024-10-08T09:12:19.433Z] 1553.00 IOPS, 97.06 MiB/s 00:06:27.750 Latency(us) 00:06:27.750 [2024-10-08T09:12:19.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0xbd0b 00:06:27.750 Nvme0n1 : 5.76 121.08 7.57 0.00 0.00 1001145.46 8368.44 1161499.57 00:06:27.750 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:27.750 Nvme0n1 : 5.68 112.61 7.04 0.00 0.00 1091066.56 17241.01 1122782.92 00:06:27.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0xa000 00:06:27.750 Nvme1n1 : 5.77 122.95 7.68 0.00 0.00 959961.69 24500.38 1180857.90 00:06:27.750 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0xa000 length 0xa000 00:06:27.750 Nvme1n1 : 5.82 114.00 7.13 0.00 0.00 1038204.12 81466.29 1142141.24 00:06:27.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0x8000 00:06:27.750 Nvme2n1 : 5.85 128.65 8.04 0.00 0.00 895741.97 41338.09 1206669.00 00:06:27.750 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x8000 length 0x8000 00:06:27.750 Nvme2n1 : 5.89 119.46 7.47 0.00 0.00 970690.13 64124.46 1167952.34 00:06:27.750 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0x8000 00:06:27.750 Nvme2n2 : 5.90 132.85 8.30 0.00 0.00 841349.78 52025.50 1232480.10 00:06:27.750 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x8000 length 0x8000 00:06:27.750 Nvme2n2 : 5.95 124.66 7.79 0.00 0.00 903112.85 15022.87 1206669.00 00:06:27.750 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0x8000 00:06:27.750 Nvme2n3 : 5.96 141.29 8.83 0.00 0.00 765057.90 30045.74 1258291.20 00:06:27.750 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x8000 length 0x8000 00:06:27.750 Nvme2n3 : 5.95 128.81 8.05 0.00 0.00 848333.71 38515.00 1219574.55 00:06:27.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x0 length 0x2000 00:06:27.750 Nvme3n1 : 6.03 161.82 10.11 
0.00 0.00 647698.91 740.43 1284102.30 00:06:27.750 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:27.750 Verification LBA range: start 0x2000 length 0x2000 00:06:27.750 Nvme3n1 : 6.02 145.03 9.06 0.00 0.00 730295.37 838.10 1219574.55 00:06:27.750 [2024-10-08T09:12:19.433Z] =================================================================================================================== 00:06:27.750 [2024-10-08T09:12:19.433Z] Total : 1553.22 97.08 0.00 0.00 876221.94 740.43 1284102.30 00:06:30.276 00:06:30.276 real 0m9.150s 00:06:30.276 user 0m17.154s 00:06:30.276 sys 0m0.260s 00:06:30.276 09:12:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.276 09:12:21 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:30.276 ************************************ 00:06:30.276 END TEST bdev_verify_big_io 00:06:30.276 ************************************ 00:06:30.276 09:12:21 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.276 09:12:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:30.276 09:12:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.276 09:12:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.276 ************************************ 00:06:30.276 START TEST bdev_write_zeroes 00:06:30.276 ************************************ 00:06:30.276 09:12:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.276 [2024-10-08 09:12:21.541200] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:30.276 [2024-10-08 09:12:21.541333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60945 ] 00:06:30.276 [2024-10-08 09:12:21.690713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.276 [2024-10-08 09:12:21.884134] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.841 Running I/O for 1 seconds... 
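The bdev_write_zeroes pass now running reuses the same binary with only the workload and duration swapped, and with no core mask, which is why a single reactor came up this time:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1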
00:06:32.211 70656.00 IOPS, 276.00 MiB/s 00:06:32.211 Latency(us) 00:06:32.211 [2024-10-08T09:12:23.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.211 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme0n1 : 1.02 11722.09 45.79 0.00 0.00 10895.94 7612.26 21475.64 00:06:32.211 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme1n1 : 1.02 11707.84 45.73 0.00 0.00 10895.44 7763.50 21273.99 00:06:32.211 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme2n1 : 1.02 11694.49 45.68 0.00 0.00 10868.85 7813.91 20265.75 00:06:32.211 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme2n2 : 1.02 11680.58 45.63 0.00 0.00 10838.70 7864.32 19660.80 00:06:32.211 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme2n3 : 1.03 11667.11 45.57 0.00 0.00 10815.92 7763.50 19660.80 00:06:32.211 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:32.211 Nvme3n1 : 1.03 11650.40 45.51 0.00 0.00 10799.09 6200.71 21374.82 00:06:32.211 [2024-10-08T09:12:23.894Z] =================================================================================================================== 00:06:32.211 [2024-10-08T09:12:23.894Z] Total : 70122.50 273.92 0.00 0.00 10852.32 6200.71 21475.64 00:06:32.808 00:06:32.808 real 0m2.846s 00:06:32.808 user 0m2.526s 00:06:32.808 sys 0m0.204s 00:06:32.808 ************************************ 00:06:32.808 END TEST bdev_write_zeroes 00:06:32.808 ************************************ 00:06:32.808 09:12:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.808 09:12:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:32.808 09:12:24 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.808 09:12:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:32.808 09:12:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.808 09:12:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.808 ************************************ 00:06:32.808 START TEST bdev_json_nonenclosed 00:06:32.808 ************************************ 00:06:32.808 09:12:24 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:32.808 [2024-10-08 09:12:24.419313] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
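bdev_json_nonenclosed, now starting, is a negative test: it hands bdevperf a config whose top-level JSON value is not an object and expects the "not enclosed in {}" failure logged below. The fixture's contents are not echoed into this log; a hypothetical reproducer under that assumption (the /tmp path is illustrative, not the harness's file):

# Any syntactically valid JSON whose top level is not an object trips the same check.
printf '[]\n' > /tmp/nonenclosed-repro.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed-repro.json -q 128 -o 4096 -w write_zeroes -t 1
# Expected: "Invalid JSON configuration: not enclosed in {}." and a non-zero exit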
00:06:32.808 [2024-10-08 09:12:24.419453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:06:33.065 [2024-10-08 09:12:24.566640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.326 [2024-10-08 09:12:24.766871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.326 [2024-10-08 09:12:24.766947] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:33.326 [2024-10-08 09:12:24.766963] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.326 [2024-10-08 09:12:24.766972] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.586 00:06:33.586 real 0m0.697s 00:06:33.586 user 0m0.480s 00:06:33.586 sys 0m0.112s 00:06:33.586 09:12:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.586 ************************************ 00:06:33.586 END TEST bdev_json_nonenclosed 00:06:33.586 09:12:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:33.586 ************************************ 00:06:33.586 09:12:25 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.586 09:12:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:33.586 09:12:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.586 09:12:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.586 ************************************ 00:06:33.586 START TEST bdev_json_nonarray 00:06:33.586 ************************************ 00:06:33.586 09:12:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.586 [2024-10-08 09:12:25.177559] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:33.586 [2024-10-08 09:12:25.177732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:06:33.847 [2024-10-08 09:12:25.339982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.847 [2024-10-08 09:12:25.497695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.847 [2024-10-08 09:12:25.497779] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
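
The two JSON negative tests here feed bdevperf deliberately malformed configurations. A valid SPDK JSON config is a single enclosing object whose "subsystems" key is an array, which is exactly what the two error messages above check; a minimal sketch follows (the file names below are illustrative, not the fixtures under test/bdev/):

# Well-formed: one enclosing object, and "subsystems" is an array.
cat > /tmp/good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF
# nonenclosed.json drops the outer {} ("not enclosed in {}" above), and
# nonarray.json makes "subsystems" a non-array ("'subsystems' should be an array" above).
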
00:06:33.847 [2024-10-08 09:12:25.497793] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:33.847 [2024-10-08 09:12:25.497801] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.105 00:06:34.105 real 0m0.634s 00:06:34.105 user 0m0.414s 00:06:34.105 sys 0m0.115s 00:06:34.105 09:12:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.105 ************************************ 00:06:34.105 END TEST bdev_json_nonarray 00:06:34.105 ************************************ 00:06:34.105 09:12:25 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:34.105 09:12:25 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:34.105 09:12:25 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:34.106 09:12:25 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:34.106 00:06:34.106 real 0m38.292s 00:06:34.106 user 0m59.353s 00:06:34.106 sys 0m5.060s 00:06:34.106 09:12:25 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.106 09:12:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.106 ************************************ 00:06:34.106 END TEST blockdev_nvme 00:06:34.106 ************************************ 00:06:34.106 09:12:25 -- spdk/autotest.sh@209 -- # uname -s 00:06:34.106 09:12:25 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:34.106 09:12:25 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.106 09:12:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:34.106 09:12:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.106 09:12:25 -- common/autotest_common.sh@10 -- # set +x 00:06:34.363 ************************************ 00:06:34.363 START TEST blockdev_nvme_gpt 00:06:34.363 ************************************ 00:06:34.363 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:34.363 * Looking for test storage... 
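
Every START/END TEST pair in this log comes from the run_test helper in autotest_common.sh: it takes a test name plus a command line, prints the banners, and times the command (the real/user/sys triples above). The call that opens this suite, as traced:

# run_test <name> <command> [args...]
run_test "blockdev_nvme_gpt" /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
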
00:06:34.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:34.363 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.363 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.363 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.363 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.363 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.364 09:12:25 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.364 --rc genhtml_branch_coverage=1 00:06:34.364 --rc genhtml_function_coverage=1 00:06:34.364 --rc genhtml_legend=1 00:06:34.364 --rc geninfo_all_blocks=1 00:06:34.364 --rc geninfo_unexecuted_blocks=1 00:06:34.364 00:06:34.364 ' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.364 --rc 
genhtml_branch_coverage=1 00:06:34.364 --rc genhtml_function_coverage=1 00:06:34.364 --rc genhtml_legend=1 00:06:34.364 --rc geninfo_all_blocks=1 00:06:34.364 --rc geninfo_unexecuted_blocks=1 00:06:34.364 00:06:34.364 ' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.364 --rc genhtml_branch_coverage=1 00:06:34.364 --rc genhtml_function_coverage=1 00:06:34.364 --rc genhtml_legend=1 00:06:34.364 --rc geninfo_all_blocks=1 00:06:34.364 --rc geninfo_unexecuted_blocks=1 00:06:34.364 00:06:34.364 ' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.364 --rc genhtml_branch_coverage=1 00:06:34.364 --rc genhtml_function_coverage=1 00:06:34.364 --rc genhtml_legend=1 00:06:34.364 --rc geninfo_all_blocks=1 00:06:34.364 --rc geninfo_unexecuted_blocks=1 00:06:34.364 00:06:34.364 ' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61115 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61115 
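
The scripts/common.sh trace above is the lcov version gate: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field as integers, treating missing fields as zero. A condensed sketch of that comparison (the real helper supports more operators than just less-than):

# Minimal version_lt: succeeds when $1 sorts strictly before $2.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call traced above
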
00:06:34.364 09:12:25 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61115 ']' 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.364 09:12:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:34.364 [2024-10-08 09:12:26.000850] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:34.364 [2024-10-08 09:12:26.000974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61115 ] 00:06:34.621 [2024-10-08 09:12:26.144722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.878 [2024-10-08 09:12:26.353716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.444 09:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.444 09:12:26 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:35.444 09:12:26 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:35.444 09:12:26 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:35.444 09:12:26 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.700 Waiting for block devices as requested 00:06:35.700 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.700 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.957 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:35.957 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:41.251 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:41.252 09:12:32 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:41.252 BYT; 00:06:41.252 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:41.252 BYT; 00:06:41.252 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.252 09:12:32 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:41.252 09:12:32 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:42.194 The operation has completed successfully. 00:06:42.194 09:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:43.133 The operation has completed successfully. 00:06:43.133 09:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.963 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.963 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.963 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.963 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:44.224 09:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.224 09:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.224 [] 00:06:44.224 09:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:44.224 09:12:35 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:44.224 09:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.224 09:12:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:44.567 09:12:36 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:44.567 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:44.568 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b66f6f57-17a1-40ac-ac80-5d7732e63819"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b66f6f57-17a1-40ac-ac80-5d7732e63819",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "122ac603-593f-494f-8a72-fad61a47d3ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "122ac603-593f-494f-8a72-fad61a47d3ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d55cef6d-86f9-43b3-8c9f-4eb415796058"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d55cef6d-86f9-43b3-8c9f-4eb415796058",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8b7dc60f-fb84-4799-a86b-5e2cd0d47dfd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8b7dc60f-fb84-4799-a86b-5e2cd0d47dfd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "52046906-6b03-4293-afc2-a01b01476e96"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "52046906-6b03-4293-afc2-a01b01476e96",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:44.568 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:44.568 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:44.568 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:44.568 09:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61115 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61115 ']' 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61115 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61115 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.568 killing process with pid 61115 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61115' 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61115 00:06:44.568 09:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61115 00:06:45.946 09:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:45.946 09:12:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.946 09:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:45.946 09:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.946 09:12:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.946 ************************************ 00:06:45.946 START TEST bdev_hello_world 00:06:45.946 ************************************ 00:06:45.946 09:12:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:45.946 
[2024-10-08 09:12:37.525842] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:45.946 [2024-10-08 09:12:37.525945] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61736 ] 00:06:46.208 [2024-10-08 09:12:37.669889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.208 [2024-10-08 09:12:37.823865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.778 [2024-10-08 09:12:38.324463] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:46.778 [2024-10-08 09:12:38.324510] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:46.778 [2024-10-08 09:12:38.324533] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:46.778 [2024-10-08 09:12:38.327022] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:46.778 [2024-10-08 09:12:38.327420] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:46.778 [2024-10-08 09:12:38.327448] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:46.778 [2024-10-08 09:12:38.327580] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:46.778 00:06:46.778 [2024-10-08 09:12:38.327608] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:47.753 00:06:47.753 real 0m1.619s 00:06:47.753 user 0m1.347s 00:06:47.753 sys 0m0.163s 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 ************************************ 00:06:47.753 END TEST bdev_hello_world 00:06:47.753 ************************************ 00:06:47.753 09:12:39 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:47.753 09:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.753 09:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.753 09:12:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 ************************************ 00:06:47.753 START TEST bdev_bounds 00:06:47.753 ************************************ 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61773 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.753 Process bdevio pid: 61773 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61773' 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61773 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61773 ']' 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:47.753 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.753 09:12:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:47.753 [2024-10-08 09:12:39.189142] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:06:47.753 [2024-10-08 09:12:39.189248] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61773 ] 00:06:47.753 [2024-10-08 09:12:39.333847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.012 [2024-10-08 09:12:39.492565] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.012 [2024-10-08 09:12:39.492879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.012 [2024-10-08 09:12:39.492903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.576 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.576 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:48.576 09:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:48.576 I/O targets: 00:06:48.576 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:48.576 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:48.576 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:48.576 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.576 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.576 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:48.576 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:48.576 00:06:48.576 00:06:48.576 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.576 http://cunit.sourceforge.net/ 00:06:48.576 00:06:48.576 00:06:48.576 Suite: bdevio tests on: Nvme3n1 00:06:48.576 Test: blockdev write read block ...passed 00:06:48.576 Test: blockdev write zeroes read block ...passed 00:06:48.576 Test: blockdev write zeroes read no split ...passed 00:06:48.576 Test: blockdev write zeroes read split ...passed 00:06:48.576 Test: blockdev write zeroes read split partial ...passed 00:06:48.576 Test: blockdev reset ...[2024-10-08 09:12:40.226159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:48.576 [2024-10-08 09:12:40.228949] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
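
The MiB figures in the I/O targets banner above follow directly from block count times the 4096-byte block size; Nvme3n1 is the cleanest worked example since it lands on an exact power of two.

# Nvme3n1: 262144 blocks x 4096 B = exactly 1024 MiB, as the banner reports.
echo $(( 262144 * 4096 / 1024 / 1024 ))   # prints 1024
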
00:06:48.576 passed 00:06:48.576 Test: blockdev write read 8 blocks ...passed 00:06:48.576 Test: blockdev write read size > 128k ...passed 00:06:48.576 Test: blockdev write read invalid size ...passed 00:06:48.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.577 Test: blockdev write read max offset ...passed 00:06:48.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.577 Test: blockdev writev readv 8 blocks ...passed 00:06:48.577 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.577 Test: blockdev writev readv block ...passed 00:06:48.577 Test: blockdev writev readv size > 128k ...passed 00:06:48.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.577 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.234305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:48.577 Test: blockdev nvme passthru rw ...passed 00:06:48.577 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.577 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2bbe06000 len:0x1000 00:06:48.577 [2024-10-08 09:12:40.234464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.577 [2024-10-08 09:12:40.234927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.577 [2024-10-08 09:12:40.234952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.577 passed 00:06:48.577 Test: blockdev copy ...passed 00:06:48.577 Suite: bdevio tests on: Nvme2n3 00:06:48.577 Test: blockdev write read block ...passed 00:06:48.577 Test: blockdev write zeroes read block ...passed 00:06:48.577 Test: blockdev write zeroes read no split ...passed 00:06:48.835 Test: blockdev write zeroes read split ...passed 00:06:48.835 Test: blockdev write zeroes read split partial ...passed 00:06:48.835 Test: blockdev reset ...[2024-10-08 09:12:40.280635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:48.835 [2024-10-08 09:12:40.283528] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:48.835 passed 00:06:48.835 Test: blockdev write read 8 blocks ...passed 00:06:48.835 Test: blockdev write read size > 128k ...passed 00:06:48.835 Test: blockdev write read invalid size ...passed 00:06:48.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.835 Test: blockdev write read max offset ...passed 00:06:48.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.835 Test: blockdev writev readv 8 blocks ...passed 00:06:48.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.835 Test: blockdev writev readv block ...passed 00:06:48.835 Test: blockdev writev readv size > 128k ...passed 00:06:48.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.835 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.289796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba63c000 len:0x1000 00:06:48.835 [2024-10-08 09:12:40.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev nvme passthru rw ...passed 00:06:48.835 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:12:40.291291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.835 [2024-10-08 09:12:40.291411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev nvme admin passthru ...passed 00:06:48.835 Test: blockdev copy ...passed 00:06:48.835 Suite: bdevio tests on: Nvme2n2 00:06:48.835 Test: blockdev write read block ...passed 00:06:48.835 Test: blockdev write zeroes read block ...passed 00:06:48.835 Test: blockdev write zeroes read no split ...passed 00:06:48.835 Test: blockdev write zeroes read split ...passed 00:06:48.835 Test: blockdev write zeroes read split partial ...passed 00:06:48.835 Test: blockdev reset ...[2024-10-08 09:12:40.350466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:48.835 [2024-10-08 09:12:40.353048] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:48.835 passed 00:06:48.835 Test: blockdev write read 8 blocks ...passed 00:06:48.835 Test: blockdev write read size > 128k ...passed 00:06:48.835 Test: blockdev write read invalid size ...passed 00:06:48.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.835 Test: blockdev write read max offset ...passed 00:06:48.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.835 Test: blockdev writev readv 8 blocks ...passed 00:06:48.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.835 Test: blockdev writev readv block ...passed 00:06:48.835 Test: blockdev writev readv size > 128k ...passed 00:06:48.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.835 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.362946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba636000 len:0x1000 00:06:48.835 [2024-10-08 09:12:40.363078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev nvme passthru rw ...passed 00:06:48.835 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:12:40.363936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:48.835 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:06:48.835 [2024-10-08 09:12:40.364114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev copy ...passed 00:06:48.835 Suite: bdevio tests on: Nvme2n1 00:06:48.835 Test: blockdev write read block ...passed 00:06:48.835 Test: blockdev write zeroes read block ...passed 00:06:48.835 Test: blockdev write zeroes read no split ...passed 00:06:48.835 Test: blockdev write zeroes read split ...passed 00:06:48.835 Test: blockdev write zeroes read split partial ...passed 00:06:48.835 Test: blockdev reset ...[2024-10-08 09:12:40.418462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:48.835 [2024-10-08 09:12:40.421099] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:48.835 passed 00:06:48.835 Test: blockdev write read 8 blocks ...passed 00:06:48.835 Test: blockdev write read size > 128k ...passed 00:06:48.835 Test: blockdev write read invalid size ...passed 00:06:48.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.835 Test: blockdev write read max offset ...passed 00:06:48.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.835 Test: blockdev writev readv 8 blocks ...passed 00:06:48.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.835 Test: blockdev writev readv block ...passed 00:06:48.835 Test: blockdev writev readv size > 128k ...passed 00:06:48.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.835 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.427317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba632000 len:0x1000 00:06:48.835 [2024-10-08 09:12:40.427452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev nvme passthru rw ...passed 00:06:48.835 Test: blockdev nvme passthru vendor specific ...[2024-10-08 09:12:40.428154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:48.835 [2024-10-08 09:12:40.428249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:48.835 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev copy ...passed 00:06:48.835 Suite: bdevio tests on: Nvme1n1p2 00:06:48.835 Test: blockdev write read block ...passed 00:06:48.835 Test: blockdev write zeroes read block ...passed 00:06:48.835 Test: blockdev write zeroes read no split ...passed 00:06:48.835 Test: blockdev write zeroes read split ...passed 00:06:48.835 Test: blockdev write zeroes read split partial ...passed 00:06:48.835 Test: blockdev reset ...[2024-10-08 09:12:40.491837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:48.835 [2024-10-08 09:12:40.494399] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:48.835 passed 00:06:48.835 Test: blockdev write read 8 blocks ...passed 00:06:48.835 Test: blockdev write read size > 128k ...passed 00:06:48.835 Test: blockdev write read invalid size ...passed 00:06:48.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:48.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:48.835 Test: blockdev write read max offset ...passed 00:06:48.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:48.835 Test: blockdev writev readv 8 blocks ...passed 00:06:48.835 Test: blockdev writev readv 30 x 1block ...passed 00:06:48.835 Test: blockdev writev readv block ...passed 00:06:48.835 Test: blockdev writev readv size > 128k ...passed 00:06:48.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:48.835 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.501487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ba62e000 len:0x1000 00:06:48.835 [2024-10-08 09:12:40.501613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:48.835 passed 00:06:48.835 Test: blockdev nvme passthru rw ...passed 00:06:48.835 Test: blockdev nvme passthru vendor specific ...passed 00:06:48.835 Test: blockdev nvme admin passthru ...passed 00:06:48.835 Test: blockdev copy ...passed 00:06:48.835 Suite: bdevio tests on: Nvme1n1p1 00:06:48.835 Test: blockdev write read block ...passed 00:06:48.835 Test: blockdev write zeroes read block ...passed 00:06:48.835 Test: blockdev write zeroes read no split ...passed 00:06:49.093 Test: blockdev write zeroes read split ...passed 00:06:49.093 Test: blockdev write zeroes read split partial ...passed 00:06:49.093 Test: blockdev reset ...[2024-10-08 09:12:40.545806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:49.093 [2024-10-08 09:12:40.548333] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
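Worth noting for the two suites around this point: Nvme1n1p1 and Nvme1n1p2 are GPT partition bdevs on top of Nvme1n1 (this is the blockdev_nvme_gpt test), so the LBAs in their COMPARE prints are post-translation raw namespace addresses. Block 0 of Nvme1n1p2 maps to lba:655360 above, and block 0 of Nvme1n1p1 maps to lba:256 in the suite that follows, matching each partition's start offset on the shared namespace. A sketch for confirming the layout from the target side (the socket path is assumed, and the exact driver_specific field names vary by SPDK version):

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p2 \
      | jq '.[0].driver_specific'   # expect a gpt section naming the base bdev and offset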
00:06:49.093 passed 00:06:49.093 Test: blockdev write read 8 blocks ...passed 00:06:49.093 Test: blockdev write read size > 128k ...passed 00:06:49.093 Test: blockdev write read invalid size ...passed 00:06:49.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.093 Test: blockdev write read max offset ...passed 00:06:49.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.093 Test: blockdev writev readv 8 blocks ...passed 00:06:49.093 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.093 Test: blockdev writev readv block ...passed 00:06:49.093 Test: blockdev writev readv size > 128k ...passed 00:06:49.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.093 Test: blockdev comparev and writev ...[2024-10-08 09:12:40.554756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2a9e0e000 len:0x1000 00:06:49.093 [2024-10-08 09:12:40.554803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:49.094 passed 00:06:49.094 Test: blockdev nvme passthru rw ...passed 00:06:49.094 Test: blockdev nvme passthru vendor specific ...passed 00:06:49.094 Test: blockdev nvme admin passthru ...passed 00:06:49.094 Test: blockdev copy ...passed 00:06:49.094 Suite: bdevio tests on: Nvme0n1 00:06:49.094 Test: blockdev write read block ...passed 00:06:49.094 Test: blockdev write zeroes read block ...passed 00:06:49.094 Test: blockdev write zeroes read no split ...passed 00:06:49.094 Test: blockdev write zeroes read split ...passed 00:06:49.094 Test: blockdev write zeroes read split partial ...passed 00:06:49.094 Test: blockdev reset ...[2024-10-08 09:12:40.597521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:49.094 [2024-10-08 09:12:40.599943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:49.094 passed 00:06:49.094 Test: blockdev write read 8 blocks ...passed 00:06:49.094 Test: blockdev write read size > 128k ...passed 00:06:49.094 Test: blockdev write read invalid size ...passed 00:06:49.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:49.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:49.094 Test: blockdev write read max offset ...passed 00:06:49.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:49.094 Test: blockdev writev readv 8 blocks ...passed 00:06:49.094 Test: blockdev writev readv 30 x 1block ...passed 00:06:49.094 Test: blockdev writev readv block ...passed 00:06:49.094 Test: blockdev writev readv size > 128k ...passed 00:06:49.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:49.094 Test: blockdev comparev and writev ...passed 00:06:49.094 Test: blockdev nvme passthru rw ...[2024-10-08 09:12:40.605103] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:49.094 separate metadata which is not supported yet. 
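The *ERROR* line above is a skip, not a failure: bdevio declines to run comparev_and_writev against Nvme0n1 because that bdev carries separate (non-interleaved) metadata, which this code path does not support yet, and the case is still counted as passed. Whether a bdev has such metadata can be read from its JSON description; a sketch, with the socket path assumed and the md_size/md_interleave field names taken from memory of bdev_get_bdevs output rather than from this log:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {block_size, md_size, md_interleave}'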
00:06:49.094 passed 00:06:49.094 Test: blockdev nvme passthru vendor specific ...passed 00:06:49.094 Test: blockdev nvme admin passthru ...[2024-10-08 09:12:40.605590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:49.094 [2024-10-08 09:12:40.605624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:49.094 passed 00:06:49.094 Test: blockdev copy ...passed 00:06:49.094 00:06:49.094 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.094 suites 7 7 n/a 0 0 00:06:49.094 tests 161 161 161 0 0 00:06:49.094 asserts 1025 1025 1025 0 n/a 00:06:49.094 00:06:49.094 Elapsed time = 1.148 seconds 00:06:49.094 0 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61773 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61773 ']' 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61773 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61773 00:06:49.094 killing process with pid 61773 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61773' 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61773 00:06:49.094 09:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61773 00:06:50.037 ************************************ 00:06:50.037 END TEST bdev_bounds 00:06:50.037 ************************************ 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:50.037 00:06:50.037 real 0m2.234s 00:06:50.037 user 0m5.647s 00:06:50.037 sys 0m0.265s 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:50.037 09:12:41 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:50.037 09:12:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:50.037 09:12:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.037 09:12:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.037 ************************************ 00:06:50.037 START TEST bdev_nbd 00:06:50.037 ************************************ 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61828 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61828 /var/tmp/spdk-nbd.sock 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61828 ']' 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.037 09:12:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:50.037 [2024-10-08 09:12:41.477044] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
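From this point the trace is nbd_function_test at work: the bdev_svc app above listens on /var/tmp/spdk-nbd.sock, each of the seven bdevs (Nvme0n1, the two GPT partitions, the three Nvme2 namespaces, Nvme3n1) is exported as a kernel /dev/nbdN device, smoke-tested, and torn down; a second pass then restarts all seven and pushes random data through them. The per-device idiom that repeats below, condensed into one sequence (every command is lifted from the trace itself; only the condensation is mine):

  # Export a bdev over NBD, wait for the kernel to publish the device,
  # prove it serves I/O with one 4 KiB O_DIRECT read, then detach it.
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions        # waitfornbd retries this up to 20 times
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct
  stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # only a nonzero size is required
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The data-verify pass near the end of this excerpt seeds a 1 MiB file from /dev/urandom (256 x 4 KiB) and dd's it onto every /dev/nbdN; the matching read-back comparison presumably follows past the point where this excerpt cuts off.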
00:06:50.037 [2024-10-08 09:12:41.477319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.037 [2024-10-08 09:12:41.621099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.299 [2024-10-08 09:12:41.808283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:50.873 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.195 1+0 records in 00:06:51.195 1+0 records out 00:06:51.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367056 s, 11.2 MB/s 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:51.195 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.196 1+0 records in 00:06:51.196 1+0 records out 00:06:51.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469185 s, 8.7 MB/s 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.196 09:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.457 1+0 records in 00:06:51.457 1+0 records out 00:06:51.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00095056 s, 4.3 MB/s 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.457 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.719 1+0 records in 00:06:51.719 1+0 records out 00:06:51.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447756 s, 9.1 MB/s 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.719 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.981 1+0 records in 00:06:51.981 1+0 records out 00:06:51.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455146 s, 9.0 MB/s 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:51.981 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.242 1+0 records in 00:06:52.242 1+0 records out 00:06:52.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471066 s, 8.7 MB/s 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.242 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.501 1+0 records in 00:06:52.501 1+0 records out 00:06:52.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494094 s, 8.3 MB/s 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:52.501 09:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.501 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd0", 00:06:52.501 "bdev_name": "Nvme0n1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd1", 00:06:52.501 "bdev_name": "Nvme1n1p1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd2", 00:06:52.501 "bdev_name": "Nvme1n1p2" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd3", 00:06:52.501 "bdev_name": "Nvme2n1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd4", 00:06:52.501 "bdev_name": "Nvme2n2" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd5", 00:06:52.501 "bdev_name": "Nvme2n3" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd6", 00:06:52.501 "bdev_name": "Nvme3n1" 00:06:52.501 } 00:06:52.501 ]' 00:06:52.501 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:52.501 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd0", 00:06:52.501 "bdev_name": "Nvme0n1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd1", 00:06:52.501 "bdev_name": "Nvme1n1p1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd2", 00:06:52.501 "bdev_name": "Nvme1n1p2" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd3", 00:06:52.501 "bdev_name": "Nvme2n1" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd4", 00:06:52.501 "bdev_name": "Nvme2n2" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd5", 00:06:52.501 "bdev_name": "Nvme2n3" 00:06:52.501 }, 00:06:52.501 { 00:06:52.501 "nbd_device": "/dev/nbd6", 00:06:52.501 "bdev_name": "Nvme3n1" 00:06:52.501 } 00:06:52.501 ]' 00:06:52.501 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.760 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.018 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.278 09:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.278 09:12:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.539 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.798 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.060 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.324 
09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.324 09:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:54.624 /dev/nbd0 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.624 1+0 records in 00:06:54.624 1+0 records out 00:06:54.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300797 s, 13.6 MB/s 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.624 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:54.624 /dev/nbd1 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.885 09:12:46 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.885 1+0 records in 00:06:54.885 1+0 records out 00:06:54.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407709 s, 10.0 MB/s 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:54.885 /dev/nbd10 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.885 1+0 records in 00:06:54.885 1+0 records out 00:06:54.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364368 s, 11.2 MB/s 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:54.885 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:55.146 /dev/nbd11 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.146 1+0 records in 00:06:55.146 1+0 records out 00:06:55.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381193 s, 10.7 MB/s 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.146 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.147 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.147 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.147 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:55.409 /dev/nbd12 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 
/proc/partitions 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.409 1+0 records in 00:06:55.409 1+0 records out 00:06:55.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414418 s, 9.9 MB/s 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.409 09:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:55.670 /dev/nbd13 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.670 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.670 1+0 records in 00:06:55.670 1+0 records out 00:06:55.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396539 s, 10.3 MB/s 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.671 09:12:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.671 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:55.932 /dev/nbd14 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.932 1+0 records in 00:06:55.932 1+0 records out 00:06:55.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374127 s, 10.9 MB/s 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.932 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.932 { 00:06:55.932 "nbd_device": "/dev/nbd0", 00:06:55.932 "bdev_name": "Nvme0n1" 00:06:55.932 }, 00:06:55.932 { 00:06:55.932 "nbd_device": "/dev/nbd1", 00:06:55.932 "bdev_name": "Nvme1n1p1" 00:06:55.932 }, 00:06:55.932 { 00:06:55.932 "nbd_device": "/dev/nbd10", 00:06:55.933 "bdev_name": "Nvme1n1p2" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd11", 00:06:55.933 "bdev_name": "Nvme2n1" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd12", 00:06:55.933 "bdev_name": "Nvme2n2" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd13", 
00:06:55.933 "bdev_name": "Nvme2n3" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd14", 00:06:55.933 "bdev_name": "Nvme3n1" 00:06:55.933 } 00:06:55.933 ]' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd0", 00:06:55.933 "bdev_name": "Nvme0n1" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd1", 00:06:55.933 "bdev_name": "Nvme1n1p1" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd10", 00:06:55.933 "bdev_name": "Nvme1n1p2" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd11", 00:06:55.933 "bdev_name": "Nvme2n1" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd12", 00:06:55.933 "bdev_name": "Nvme2n2" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd13", 00:06:55.933 "bdev_name": "Nvme2n3" 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "nbd_device": "/dev/nbd14", 00:06:55.933 "bdev_name": "Nvme3n1" 00:06:55.933 } 00:06:55.933 ]' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.933 /dev/nbd1 00:06:55.933 /dev/nbd10 00:06:55.933 /dev/nbd11 00:06:55.933 /dev/nbd12 00:06:55.933 /dev/nbd13 00:06:55.933 /dev/nbd14' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.933 /dev/nbd1 00:06:55.933 /dev/nbd10 00:06:55.933 /dev/nbd11 00:06:55.933 /dev/nbd12 00:06:55.933 /dev/nbd13 00:06:55.933 /dev/nbd14' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:55.933 256+0 records in 00:06:55.933 256+0 records out 00:06:55.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782455 s, 134 MB/s 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.933 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.194 256+0 records in 00:06:56.194 256+0 records out 00:06:56.194 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0686252 s, 15.3 MB/s 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.194 256+0 records in 00:06:56.194 256+0 records out 00:06:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0752806 s, 13.9 MB/s 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:56.194 256+0 records in 00:06:56.194 256+0 records out 00:06:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.073816 s, 14.2 MB/s 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.194 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:56.456 256+0 records in 00:06:56.457 256+0 records out 00:06:56.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0786675 s, 13.3 MB/s 00:06:56.457 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.457 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:56.457 256+0 records in 00:06:56.457 256+0 records out 00:06:56.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0731649 s, 14.3 MB/s 00:06:56.457 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.457 09:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:56.457 256+0 records in 00:06:56.457 256+0 records out 00:06:56.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0806508 s, 13.0 MB/s 00:06:56.457 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.457 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:56.717 256+0 records in 00:06:56.717 256+0 records out 00:06:56.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0858399 s, 12.2 MB/s 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.717 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.977 09:12:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.977 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.238 09:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.500 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.762 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.023 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.285 09:12:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:58.285 09:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:58.546 malloc_lvol_verify 00:06:58.546 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:58.807 cb0e4283-da97-4b91-bc2b-50e4422e6ee1 00:06:58.807 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:58.807 a68dd2cd-ee5e-4484-b826-07db5b0577b8 00:06:58.807 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:59.070 /dev/nbd0 00:06:59.070 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:59.070 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:59.070 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:59.070 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:59.070 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:59.070 mke2fs 1.47.0 (5-Feb-2023) 00:06:59.070 Discarding device blocks: 0/4096 done 00:06:59.070 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:59.070 00:06:59.070 Allocating group tables: 0/1 done 00:06:59.070 Writing inode tables: 0/1 done 00:06:59.071 Creating journal (1024 blocks): done 00:06:59.071 Writing superblocks and filesystem accounting information: 0/1 done 00:06:59.071 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.071 09:12:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.071 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61828 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61828 ']' 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61828 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61828 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61828' 00:06:59.331 killing process with pid 61828 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61828 00:06:59.331 09:12:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61828 00:06:59.903 09:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:59.903 00:06:59.903 real 0m10.119s 00:06:59.903 user 0m14.382s 00:06:59.903 sys 0m3.331s 00:06:59.903 09:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.903 09:12:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:59.903 ************************************ 00:06:59.903 END TEST bdev_nbd 00:06:59.903 ************************************ 00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:59.903 skipping fio tests on NVMe due to multi-ns failures. 00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
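The bdev_nbd test that just finished exercises SPDK's NBD export path end to end: each bdev is mapped to a /dev/nbdN node over the RPC socket, 1 MiB of random data is pushed through the kernel block layer and compared back with cmp, and the nodes are torn down again (plus one extra pass that puts an ext4 filesystem on a logical volume). A condensed sketch of that flow for a single device follows; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock and that a bdev named Nvme0n1 exists. The poll interval and the /tmp paths are assumptions for the sketch, and the real nbd_common.sh helpers also do a direct-I/O read probe (the dd/stat lines above) plus extra retry and cleanup logic.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0      # map the bdev to an NBD node
    for ((i = 1; i <= 20; i++)); do                     # wait until the kernel lists it
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                       # assumed poll interval
    done
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0             # written data must round-trip
    $rpc -s $sock nbd_stop_disk /dev/nbd0               # detach, then poll until gone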
00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.903 09:12:51 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:59.903 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:59.903 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.903 09:12:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.903 ************************************ 00:06:59.903 START TEST bdev_verify 00:06:59.903 ************************************ 00:06:59.903 09:12:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.165 [2024-10-08 09:12:51.633901] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:00.165 [2024-10-08 09:12:51.634030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62234 ] 00:07:00.165 [2024-10-08 09:12:51.782151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.426 [2024-10-08 09:12:51.940965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.426 [2024-10-08 09:12:51.941183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.999 Running I/O for 5 seconds... 
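bdev_verify then drives the same seven bdevs through bdevperf rather than the kernel NBD path. The invocation below is reproduced from the trace; the flag glosses are conventional bdevperf meanings rather than anything this log spells out: -q is the queue depth per job, -o the I/O size in bytes, -w the workload, -t the run time in seconds, and -m the reactor core mask (0x3 = two cores, which is why every bdev appears twice in the result table, once per core mask).

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3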
00:07:03.319 24832.00 IOPS, 97.00 MiB/s [2024-10-08T09:12:55.941Z] 24640.00 IOPS, 96.25 MiB/s [2024-10-08T09:12:56.885Z] 24576.00 IOPS, 96.00 MiB/s [2024-10-08T09:12:57.830Z] 24528.00 IOPS, 95.81 MiB/s [2024-10-08T09:12:57.830Z] 24729.60 IOPS, 96.60 MiB/s
00:07:06.147 Latency(us)
00:07:06.147 [2024-10-08T09:12:57.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:06.147 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0xbd0bd
00:07:06.147 Nvme0n1 : 5.06 1721.51 6.72 0.00 0.00 74093.41 15325.34 82676.18
00:07:06.147 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:06.147 Nvme0n1 : 5.06 1783.06 6.97 0.00 0.00 71470.38 8822.15 77836.60
00:07:06.147 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x4ff80
00:07:06.147 Nvme1n1p1 : 5.06 1720.97 6.72 0.00 0.00 73908.04 17241.01 69367.34
00:07:06.147 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:06.147 Nvme1n1p1 : 5.08 1788.44 6.99 0.00 0.00 71326.60 15224.52 70577.23
00:07:06.147 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x4ff7f
00:07:06.147 Nvme1n1p2 : 5.06 1719.85 6.72 0.00 0.00 73761.75 17845.96 63317.86
00:07:06.147 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:06.147 Nvme1n1p2 : 5.08 1787.92 6.98 0.00 0.00 71151.95 13611.32 65737.65
00:07:06.147 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x80000
00:07:06.147 Nvme2n1 : 5.08 1725.89 6.74 0.00 0.00 73392.18 4159.02 62914.56
00:07:06.147 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x80000 length 0x80000
00:07:06.147 Nvme2n1 : 5.08 1787.42 6.98 0.00 0.00 71017.31 14014.62 61704.66
00:07:06.147 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x80000
00:07:06.147 Nvme2n2 : 5.09 1734.35 6.77 0.00 0.00 72985.31 8015.56 64124.46
00:07:06.147 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x80000 length 0x80000
00:07:06.147 Nvme2n2 : 5.09 1786.36 6.98 0.00 0.00 70929.16 14922.04 64931.05
00:07:06.147 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x80000
00:07:06.147 Nvme2n3 : 5.09 1733.90 6.77 0.00 0.00 72878.31 8267.62 65737.65
00:07:06.147 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x80000 length 0x80000
00:07:06.147 Nvme2n3 : 5.09 1785.90 6.98 0.00 0.00 70823.45 15325.34 69367.34
00:07:06.147 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x0 length 0x20000
00:07:06.147 Nvme3n1 : 5.10 1732.87 6.77 0.00 0.00 72808.51 10334.52 66544.25
00:07:06.147 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:06.147 Verification LBA range: start 0x20000 length 0x20000
00:07:06.147 Nvme3n1 : 5.09 1785.41 6.97 0.00 0.00 70722.07 9477.51 72190.42
00:07:06.147 [2024-10-08T09:12:57.830Z] ===================================================================================================================
00:07:06.147 [2024-10-08T09:12:57.830Z] Total : 24593.85 96.07 0.00 0.00 72211.28 4159.02 82676.18
00:07:07.534
00:07:07.534 real 0m7.345s
00:07:07.534 user 0m13.622s
00:07:07.534 sys 0m0.224s
00:07:07.534 09:12:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:07.534 09:12:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:07.534 ************************************
00:07:07.534 END TEST bdev_verify
00:07:07.534 ************************************
00:07:07.534 09:12:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:07.534 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:07.534 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:07.534 09:12:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:07.534 ************************************
00:07:07.534 START TEST bdev_verify_big_io
00:07:07.534 ************************************
00:07:07.534 09:12:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:07.534 [2024-10-08 09:12:59.028942] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... [2024-10-08 09:12:59.029074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62332 ]
00:07:07.534 [2024-10-08 09:12:59.177240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:07.794 [2024-10-08 09:12:59.380069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:07.794 [2024-10-08 09:12:59.380261] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.734 Running I/O for 5 seconds...
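bdev_verify_big_io, whose run has just started above, repeats that command with a single change, -o 65536: 64 KiB I/Os instead of 4 KiB. As the table below shows, per-job IOPS drops by roughly the size ratio while MiB/s and average latency rise, which is the expected trade for larger transfers.

    # Identical harness, larger I/O size (sketch, same assumptions as before).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3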
00:07:13.418 1852.00 IOPS, 115.75 MiB/s [2024-10-08T09:13:06.513Z] 2579.00 IOPS, 161.19 MiB/s [2024-10-08T09:13:06.513Z] 3494.33 IOPS, 218.40 MiB/s
00:07:14.830 Latency(us)
00:07:14.830 [2024-10-08T09:13:06.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:14.830 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x0 length 0xbd0b
00:07:14.830 Nvme0n1 : 5.84 131.46 8.22 0.00 0.00 909407.90 19862.45 1167952.34
00:07:14.830 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:14.830 Nvme0n1 : 5.70 122.14 7.63 0.00 0.00 985240.87 11695.66 1284102.30
00:07:14.830 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x0 length 0x4ff8
00:07:14.830 Nvme1n1p1 : 5.84 135.51 8.47 0.00 0.00 878616.00 81062.99 974369.08
00:07:14.830 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:14.830 Nvme1n1p1 : 5.70 125.05 7.82 0.00 0.00 945428.03 93161.94 1090519.04
00:07:14.830 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x0 length 0x4ff7
00:07:14.830 Nvme1n1p2 : 6.00 123.10 7.69 0.00 0.00 935861.18 88725.66 1477685.56
00:07:14.830 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:14.830 Nvme1n1p2 : 5.84 128.39 8.02 0.00 0.00 893436.76 125829.12 1058255.16
00:07:14.830 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x0 length 0x8000
00:07:14.830 Nvme2n1 : 6.00 143.23 8.95 0.00 0.00 789245.93 55655.19 851766.35
00:07:14.830 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.830 Verification LBA range: start 0x8000 length 0x8000
00:07:14.830 Nvme2n1 : 6.07 130.06 8.13 0.00 0.00 853317.77 69770.63 1522854.99
00:07:14.831 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x0 length 0x8000
00:07:14.831 Nvme2n2 : 6.05 148.04 9.25 0.00 0.00 741622.21 51017.26 787238.60
00:07:14.831 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x8000 length 0x8000
00:07:14.831 Nvme2n2 : 6.09 134.11 8.38 0.00 0.00 804904.50 77836.60 1548666.09
00:07:14.831 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x0 length 0x8000
00:07:14.831 Nvme2n3 : 6.09 152.67 9.54 0.00 0.00 701250.92 30650.68 803370.54
00:07:14.831 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x8000 length 0x8000
00:07:14.831 Nvme2n3 : 6.13 144.05 9.00 0.00 0.00 728965.17 16131.94 1593835.52
00:07:14.831 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x0 length 0x2000
00:07:14.831 Nvme3n1 : 6.10 163.39 10.21 0.00 0.00 642062.24 3856.54 825955.25
00:07:14.831 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:14.831 Verification LBA range: start 0x2000 length 0x2000
00:07:14.831 Nvme3n1 : 6.18 183.80 11.49 0.00 0.00 560461.98 460.01 1193763.45
00:07:14.831 [2024-10-08T09:13:06.514Z] ===================================================================================================================
00:07:14.831 [2024-10-08T09:13:06.514Z] Total : 1965.01 122.81 0.00 0.00 795724.13 460.01 1593835.52
00:07:16.746
00:07:16.746 real 0m8.992s
00:07:16.746 user 0m16.831s
00:07:16.746 sys 0m0.246s
00:07:16.746 09:13:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:16.746 09:13:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:16.746 ************************************
00:07:16.746 END TEST bdev_verify_big_io
00:07:16.746 ************************************
00:07:16.746 09:13:07 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:16.746 09:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:16.746 09:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:16.746 09:13:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:16.746 ************************************
00:07:16.746 START TEST bdev_write_zeroes
00:07:16.746 ************************************
00:07:16.746 09:13:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:16.746 [2024-10-08 09:13:08.064875] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... [2024-10-08 09:13:08.064998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ]
00:07:16.746 [2024-10-08 09:13:08.207663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.746 [2024-10-08 09:13:08.394427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.318 Running I/O for 1 seconds...
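bdev_write_zeroes switches the workload to write_zeroes for a one-second pass on a single core (the EAL line above shows -c 0x1, so each bdev gets one 0x1 job), confirming every bdev accepts the zero-fill path. Sketch of the invocation from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1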
00:07:18.746 53972.00 IOPS, 210.83 MiB/s
00:07:18.746 Latency(us)
00:07:18.746 [2024-10-08T09:13:10.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:18.746 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme0n1 : 1.02 7502.05 29.30 0.00 0.00 17023.77 9779.99 128248.91
00:07:18.746 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme1n1p1 : 1.03 7864.18 30.72 0.00 0.00 16216.05 9477.51 88322.36
00:07:18.746 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme1n1p2 : 1.03 7729.61 30.19 0.00 0.00 16457.02 10435.35 93161.94
00:07:18.746 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme2n1 : 1.03 7720.72 30.16 0.00 0.00 16411.75 10637.00 91145.45
00:07:18.746 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme2n2 : 1.03 7649.67 29.88 0.00 0.00 16541.48 10536.17 90338.86
00:07:18.746 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme2n3 : 1.03 7657.47 29.91 0.00 0.00 16504.97 10284.11 90742.15
00:07:18.746 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:18.746 Nvme3n1 : 1.03 7694.28 30.06 0.00 0.00 16402.22 10586.58 89935.56
00:07:18.746 [2024-10-08T09:13:10.429Z] ===================================================================================================================
00:07:18.746 [2024-10-08T09:13:10.429Z] Total : 53817.99 210.23 0.00 0.00 16505.11 9477.51 128248.91
00:07:19.319
00:07:19.319 real 0m2.864s
00:07:19.319 user 0m2.560s
00:07:19.319 sys 0m0.188s
00:07:19.319 09:13:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:19.319 09:13:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:19.319 ************************************
00:07:19.319 END TEST bdev_write_zeroes
00:07:19.319 ************************************
00:07:19.319 09:13:10 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:19.319 09:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:19.319 09:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:19.319 09:13:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:19.319 ************************************
00:07:19.319 START TEST bdev_json_nonenclosed
00:07:19.319 ************************************
00:07:19.319 09:13:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:19.319 [2024-10-08 09:13:10.970525] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization...
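bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json and must fail cleanly with "Invalid JSON configuration: not enclosed in {}" (the error lines follow below) instead of crashing. The actual file contents are not reproduced in this log; the heredoc below is only an illustration of the kind of input that trips that check, a subsystems body missing its enclosing top-level object.

    # Illustrative only -- the real nonenclosed.json is not shown in this log.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
        || echo "rejected as expected"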
00:07:19.319 [2024-10-08 09:13:10.970656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62500 ] 00:07:19.580 [2024-10-08 09:13:11.112367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.841 [2024-10-08 09:13:11.297529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.842 [2024-10-08 09:13:11.297605] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:19.842 [2024-10-08 09:13:11.297622] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:19.842 [2024-10-08 09:13:11.297631] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.103 00:07:20.103 real 0m0.688s 00:07:20.103 user 0m0.474s 00:07:20.103 sys 0m0.105s 00:07:20.103 09:13:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.103 09:13:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:20.103 ************************************ 00:07:20.103 END TEST bdev_json_nonenclosed 00:07:20.103 ************************************ 00:07:20.103 09:13:11 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.103 09:13:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:20.103 09:13:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.103 09:13:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.103 ************************************ 00:07:20.103 START TEST bdev_json_nonarray 00:07:20.103 ************************************ 00:07:20.103 09:13:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.103 [2024-10-08 09:13:11.682261] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:20.103 [2024-10-08 09:13:11.682363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62531 ] 00:07:20.364 [2024-10-08 09:13:11.828065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.364 [2024-10-08 09:13:12.014444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.364 [2024-10-08 09:13:12.014536] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
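bdev_json_nonarray is the companion negative test: the config parses as an object, but "subsystems" is not an array, so json_config_prepare_ctx rejects it with the error captured just above. Again the real nonarray.json is not in the log; a plausible trigger would make "subsystems" an object instead:

    # Illustrative only -- hypothetical contents for a nonarray-style failure.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF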
00:07:20.364 [2024-10-08 09:13:12.014552] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.364 [2024-10-08 09:13:12.014562] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.937 00:07:20.937 real 0m0.682s 00:07:20.937 user 0m0.475s 00:07:20.937 sys 0m0.102s 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.937 ************************************ 00:07:20.937 END TEST bdev_json_nonarray 00:07:20.937 ************************************ 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:20.937 09:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:20.937 09:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:20.937 09:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:20.937 09:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.937 09:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.937 09:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.937 ************************************ 00:07:20.937 START TEST bdev_gpt_uuid 00:07:20.937 ************************************ 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62556 00:07:20.937 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62556 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62556 ']' 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:20.938 09:13:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.938 [2024-10-08 09:13:12.454554] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
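bdev_gpt_uuid, whose trace follows, restarts spdk_tgt, reloads the bdev config, and checks that the two GPT partitions on Nvme1n1 come back from bdev_get_bdevs with the expected aliases, unique_partition_guids, and partition names. A condensed sketch of those checks for the first partition, assuming spdk_tgt is listening on the default /var/tmp/spdk.sock and using the GUID that appears in the output below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $rpc bdev_wait_for_examine                  # let the gpt module finish probing
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first, from the trace below
    bdev=$($rpc bdev_get_bdevs -b "$uuid")
    [[ $(jq -r length <<< "$bdev") == 1 ]]                       # exactly one match
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]      # alias is the GUID
    jq -r '.[0].driver_specific.gpt.partition_name' <<< "$bdev"  # -> SPDK_TEST_first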
00:07:20.938 [2024-10-08 09:13:12.454712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62556 ] 00:07:20.938 [2024-10-08 09:13:12.603852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.199 [2024-10-08 09:13:12.822459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.770 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.770 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:21.770 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.770 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.770 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 Some configs were skipped because the RPC state that can call them passed over. 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:22.342 { 00:07:22.342 "name": "Nvme1n1p1", 00:07:22.342 "aliases": [ 00:07:22.342 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:22.342 ], 00:07:22.342 "product_name": "GPT Disk", 00:07:22.342 "block_size": 4096, 00:07:22.342 "num_blocks": 655104, 00:07:22.342 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:22.342 "assigned_rate_limits": { 00:07:22.342 "rw_ios_per_sec": 0, 00:07:22.342 "rw_mbytes_per_sec": 0, 00:07:22.342 "r_mbytes_per_sec": 0, 00:07:22.342 "w_mbytes_per_sec": 0 00:07:22.342 }, 00:07:22.342 "claimed": false, 00:07:22.342 "zoned": false, 00:07:22.342 "supported_io_types": { 00:07:22.342 "read": true, 00:07:22.342 "write": true, 00:07:22.342 "unmap": true, 00:07:22.342 "flush": true, 00:07:22.342 "reset": true, 00:07:22.342 "nvme_admin": false, 00:07:22.342 "nvme_io": false, 00:07:22.342 "nvme_io_md": false, 00:07:22.342 "write_zeroes": true, 00:07:22.342 "zcopy": false, 00:07:22.342 "get_zone_info": false, 00:07:22.342 "zone_management": false, 00:07:22.342 "zone_append": false, 00:07:22.342 "compare": true, 00:07:22.342 "compare_and_write": false, 00:07:22.342 "abort": true, 00:07:22.342 "seek_hole": false, 00:07:22.342 "seek_data": false, 00:07:22.342 "copy": true, 00:07:22.342 "nvme_iov_md": false 00:07:22.342 }, 00:07:22.342 "driver_specific": { 
00:07:22.342 "gpt": { 00:07:22.342 "base_bdev": "Nvme1n1", 00:07:22.342 "offset_blocks": 256, 00:07:22.342 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:22.342 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:22.342 "partition_name": "SPDK_TEST_first" 00:07:22.342 } 00:07:22.342 } 00:07:22.342 } 00:07:22.342 ]' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:22.342 { 00:07:22.342 "name": "Nvme1n1p2", 00:07:22.342 "aliases": [ 00:07:22.342 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:22.342 ], 00:07:22.342 "product_name": "GPT Disk", 00:07:22.342 "block_size": 4096, 00:07:22.342 "num_blocks": 655103, 00:07:22.342 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:22.342 "assigned_rate_limits": { 00:07:22.342 "rw_ios_per_sec": 0, 00:07:22.342 "rw_mbytes_per_sec": 0, 00:07:22.342 "r_mbytes_per_sec": 0, 00:07:22.342 "w_mbytes_per_sec": 0 00:07:22.342 }, 00:07:22.342 "claimed": false, 00:07:22.342 "zoned": false, 00:07:22.342 "supported_io_types": { 00:07:22.342 "read": true, 00:07:22.342 "write": true, 00:07:22.342 "unmap": true, 00:07:22.342 "flush": true, 00:07:22.342 "reset": true, 00:07:22.342 "nvme_admin": false, 00:07:22.342 "nvme_io": false, 00:07:22.342 "nvme_io_md": false, 00:07:22.342 "write_zeroes": true, 00:07:22.342 "zcopy": false, 00:07:22.342 "get_zone_info": false, 00:07:22.342 "zone_management": false, 00:07:22.342 "zone_append": false, 00:07:22.342 "compare": true, 00:07:22.342 "compare_and_write": false, 00:07:22.342 "abort": true, 00:07:22.342 "seek_hole": false, 00:07:22.342 "seek_data": false, 00:07:22.342 "copy": true, 00:07:22.342 "nvme_iov_md": false 00:07:22.342 }, 00:07:22.342 "driver_specific": { 00:07:22.342 "gpt": { 00:07:22.342 "base_bdev": "Nvme1n1", 00:07:22.342 "offset_blocks": 655360, 00:07:22.342 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:22.342 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:22.342 "partition_name": "SPDK_TEST_second" 00:07:22.342 } 00:07:22.342 } 00:07:22.342 } 00:07:22.342 ]' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62556 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62556 ']' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62556 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62556 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.342 killing process with pid 62556 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62556' 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62556 00:07:22.342 09:13:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62556 00:07:24.272 00:07:24.272 real 0m3.171s 00:07:24.272 user 0m3.308s 00:07:24.272 sys 0m0.394s 00:07:24.272 09:13:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.272 09:13:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:24.272 ************************************ 00:07:24.272 END TEST bdev_gpt_uuid 00:07:24.272 ************************************ 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:24.272 09:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:24.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.533 Waiting for block devices as requested 00:07:24.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.533 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:24.533 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.533 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.821 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:29.821 09:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:29.821 09:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:30.079 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:30.079 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:30.079 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:30.079 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:30.079 09:13:21 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:30.079 00:07:30.079 real 0m55.739s 00:07:30.079 user 1m11.222s 00:07:30.079 sys 0m7.408s 00:07:30.079 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.079 09:13:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.079 ************************************ 00:07:30.079 END TEST blockdev_nvme_gpt 00:07:30.079 ************************************ 00:07:30.079 09:13:21 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:30.079 09:13:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.079 09:13:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.079 09:13:21 -- common/autotest_common.sh@10 -- # set +x 00:07:30.079 ************************************ 00:07:30.079 START TEST nvme 00:07:30.079 ************************************ 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:30.079 * Looking for test storage... 00:07:30.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.079 09:13:21 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.079 09:13:21 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.079 09:13:21 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.079 09:13:21 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.079 09:13:21 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.079 09:13:21 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:30.079 09:13:21 nvme -- scripts/common.sh@345 -- # : 1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.079 09:13:21 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.079 09:13:21 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@353 -- # local d=1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.079 09:13:21 nvme -- scripts/common.sh@355 -- # echo 1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.079 09:13:21 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@353 -- # local d=2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.079 09:13:21 nvme -- scripts/common.sh@355 -- # echo 2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.079 09:13:21 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.079 09:13:21 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.079 09:13:21 nvme -- scripts/common.sh@368 -- # return 0 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.079 --rc genhtml_branch_coverage=1 00:07:30.079 --rc genhtml_function_coverage=1 00:07:30.079 --rc genhtml_legend=1 00:07:30.079 --rc geninfo_all_blocks=1 00:07:30.079 --rc geninfo_unexecuted_blocks=1 00:07:30.079 00:07:30.079 ' 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.079 --rc genhtml_branch_coverage=1 00:07:30.079 --rc genhtml_function_coverage=1 00:07:30.079 --rc genhtml_legend=1 00:07:30.079 --rc geninfo_all_blocks=1 00:07:30.079 --rc geninfo_unexecuted_blocks=1 00:07:30.079 00:07:30.079 ' 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.079 --rc genhtml_branch_coverage=1 00:07:30.079 --rc genhtml_function_coverage=1 00:07:30.079 --rc genhtml_legend=1 00:07:30.079 --rc geninfo_all_blocks=1 00:07:30.079 --rc geninfo_unexecuted_blocks=1 00:07:30.079 00:07:30.079 ' 00:07:30.079 09:13:21 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.079 --rc genhtml_branch_coverage=1 00:07:30.079 --rc genhtml_function_coverage=1 00:07:30.079 --rc genhtml_legend=1 00:07:30.079 --rc geninfo_all_blocks=1 00:07:30.079 --rc geninfo_unexecuted_blocks=1 00:07:30.079 00:07:30.079 ' 00:07:30.080 09:13:21 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:30.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.902 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.902 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.902 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.902 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:31.160 09:13:22 nvme -- nvme/nvme.sh@79 -- # uname 00:07:31.160 Waiting for stub to ready for secondary processes... 
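The lt/cmp_versions xtrace a few records above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x: each version string is split on '.', '-', and ':' and the components are compared numerically, left to right. Below is a condensed, self-contained sketch of that comparison; it is an illustrative rewrite, not the verbatim helper, which additionally validates that every component is numeric.

#!/usr/bin/env bash
# Component-wise "less than" for dotted version strings, condensed from
# the lt()/cmp_versions() flow traced above (illustrative rewrite).
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly greater: not less-than
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly smaller: less-than
  done
  return 1  # equal versions are not less-than
}
lt 1.15 2 && echo "lcov older than 2.x: use the legacy --rc lcov_* options"

Because 1.15 sorts before 2, the run above exports the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage flags seen in the trace.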
00:07:31.160 09:13:22 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:31.160 09:13:22 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:31.160 09:13:22 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1071 -- # stubpid=63190 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63190 ]] 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:31.160 09:13:22 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:31.160 [2024-10-08 09:13:22.685795] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:07:31.160 [2024-10-08 09:13:22.686066] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:32.101 [2024-10-08 09:13:23.442734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.101 [2024-10-08 09:13:23.583978] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.101 [2024-10-08 09:13:23.584250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.101 [2024-10-08 09:13:23.584274] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.101 [2024-10-08 09:13:23.595355] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:32.101 [2024-10-08 09:13:23.595401] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.101 [2024-10-08 09:13:23.607404] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:32.101 [2024-10-08 09:13:23.607495] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:32.101 [2024-10-08 09:13:23.609367] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.101 [2024-10-08 09:13:23.609617] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:32.101 [2024-10-08 09:13:23.609678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:32.101 [2024-10-08 09:13:23.611433] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.101 [2024-10-08 09:13:23.611565] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:32.101 [2024-10-08 09:13:23.611618] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:32.101 [2024-10-08 09:13:23.613676] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:32.101 [2024-10-08 09:13:23.613814] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 
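The _start_stub flow traced above reduces to a simple pattern: launch the stub with its hugepage size, shared-memory id, and core mask, then poll once per second until the stub has created /var/run/spdk_stub0, giving up if the process disappears first; once the stub's controllers attach, the nvme_cuse sessions logged around this point are created for each controller and namespace. A minimal sketch of that loop, with the binary path and arguments taken from the trace and error handling condensed:

#!/usr/bin/env bash
# Minimal readiness poll mirroring the _start_stub trace above.
/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
echo "Waiting for stub to ready for secondary processes..."
while [ ! -e /var/run/spdk_stub0 ]; do
  # Bail out if the stub died before publishing its readiness marker.
  [ -e "/proc/$stubpid" ] || { echo "stub exited early" >&2; exit 1; }
  sleep 1s
done
echo done.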
00:07:32.101 [2024-10-08 09:13:23.613875] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:32.101 [2024-10-08 09:13:23.613913] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:32.101 [2024-10-08 09:13:23.613947] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:32.101 09:13:23 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:32.101 done. 00:07:32.101 09:13:23 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:32.101 09:13:23 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:32.101 09:13:23 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:32.101 09:13:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.101 09:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.101 ************************************ 00:07:32.101 START TEST nvme_reset 00:07:32.101 ************************************ 00:07:32.101 09:13:23 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:32.359 Initializing NVMe Controllers 00:07:32.359 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:32.359 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:32.359 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:32.359 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:32.359 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:32.359 00:07:32.359 ************************************ 00:07:32.359 END TEST nvme_reset 00:07:32.359 ************************************ 00:07:32.359 real 0m0.199s 00:07:32.359 user 0m0.056s 00:07:32.359 sys 0m0.096s 00:07:32.359 09:13:23 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.359 09:13:23 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:32.359 09:13:23 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:32.359 09:13:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.359 09:13:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.359 09:13:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.359 ************************************ 00:07:32.359 START TEST nvme_identify 00:07:32.359 ************************************ 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:32.359 09:13:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:32.359 09:13:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:32.359 09:13:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:32.359 09:13:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # 
(( 4 == 0 )) 00:07:32.359 09:13:23 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:32.359 09:13:23 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:32.619 [2024-10-08 09:13:24.117612] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63211 terminated unexpected 00:07:32.619 ===================================================== 00:07:32.619 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.619 ===================================================== 00:07:32.619 Controller Capabilities/Features 00:07:32.619 ================================ 00:07:32.619 Vendor ID: 1b36 00:07:32.619 Subsystem Vendor ID: 1af4 00:07:32.619 Serial Number: 12340 00:07:32.619 Model Number: QEMU NVMe Ctrl 00:07:32.619 Firmware Version: 8.0.0 00:07:32.619 Recommended Arb Burst: 6 00:07:32.619 IEEE OUI Identifier: 00 54 52 00:07:32.619 Multi-path I/O 00:07:32.619 May have multiple subsystem ports: No 00:07:32.619 May have multiple controllers: No 00:07:32.619 Associated with SR-IOV VF: No 00:07:32.619 Max Data Transfer Size: 524288 00:07:32.619 Max Number of Namespaces: 256 00:07:32.619 Max Number of I/O Queues: 64 00:07:32.619 NVMe Specification Version (VS): 1.4 00:07:32.619 NVMe Specification Version (Identify): 1.4 00:07:32.619 Maximum Queue Entries: 2048 00:07:32.619 Contiguous Queues Required: Yes 00:07:32.619 Arbitration Mechanisms Supported 00:07:32.619 Weighted Round Robin: Not Supported 00:07:32.619 Vendor Specific: Not Supported 00:07:32.619 Reset Timeout: 7500 ms 00:07:32.619 Doorbell Stride: 4 bytes 00:07:32.619 NVM Subsystem Reset: Not Supported 00:07:32.619 Command Sets Supported 00:07:32.620 NVM Command Set: Supported 00:07:32.620 Boot Partition: Not Supported 00:07:32.620 Memory Page Size Minimum: 4096 bytes 00:07:32.620 Memory Page Size Maximum: 65536 bytes 00:07:32.620 Persistent Memory Region: Not Supported 00:07:32.620 Optional Asynchronous Events Supported 00:07:32.620 Namespace Attribute Notices: Supported 00:07:32.620 Firmware Activation Notices: Not Supported 00:07:32.620 ANA Change Notices: Not Supported 00:07:32.620 PLE Aggregate Log Change Notices: Not Supported 00:07:32.620 LBA Status Info Alert Notices: Not Supported 00:07:32.620 EGE Aggregate Log Change Notices: Not Supported 00:07:32.620 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.620 Zone Descriptor Change Notices: Not Supported 00:07:32.620 Discovery Log Change Notices: Not Supported 00:07:32.620 Controller Attributes 00:07:32.620 128-bit Host Identifier: Not Supported 00:07:32.620 Non-Operational Permissive Mode: Not Supported 00:07:32.620 NVM Sets: Not Supported 00:07:32.620 Read Recovery Levels: Not Supported 00:07:32.620 Endurance Groups: Not Supported 00:07:32.620 Predictable Latency Mode: Not Supported 00:07:32.620 Traffic Based Keep ALive: Not Supported 00:07:32.620 Namespace Granularity: Not Supported 00:07:32.620 SQ Associations: Not Supported 00:07:32.620 UUID List: Not Supported 00:07:32.620 Multi-Domain Subsystem: Not Supported 00:07:32.620 Fixed Capacity Management: Not Supported 00:07:32.620 Variable Capacity Management: Not Supported 00:07:32.620 Delete Endurance Group: Not Supported 00:07:32.620 Delete NVM Set: Not Supported 00:07:32.620 Extended LBA Formats Supported: Supported 00:07:32.620 Flexible Data Placement Supported: Not Supported 00:07:32.620 00:07:32.620 Controller Memory Buffer Support 00:07:32.620 
================================ 00:07:32.620 Supported: No 00:07:32.620 00:07:32.620 Persistent Memory Region Support 00:07:32.620 ================================ 00:07:32.620 Supported: No 00:07:32.620 00:07:32.620 Admin Command Set Attributes 00:07:32.620 ============================ 00:07:32.620 Security Send/Receive: Not Supported 00:07:32.620 Format NVM: Supported 00:07:32.620 Firmware Activate/Download: Not Supported 00:07:32.620 Namespace Management: Supported 00:07:32.620 Device Self-Test: Not Supported 00:07:32.620 Directives: Supported 00:07:32.620 NVMe-MI: Not Supported 00:07:32.620 Virtualization Management: Not Supported 00:07:32.620 Doorbell Buffer Config: Supported 00:07:32.620 Get LBA Status Capability: Not Supported 00:07:32.620 Command & Feature Lockdown Capability: Not Supported 00:07:32.620 Abort Command Limit: 4 00:07:32.620 Async Event Request Limit: 4 00:07:32.620 Number of Firmware Slots: N/A 00:07:32.620 Firmware Slot 1 Read-Only: N/A 00:07:32.620 Firmware Activation Without Reset: N/A 00:07:32.620 Multiple Update Detection Support: N/A 00:07:32.620 Firmware Update Granularity: No Information Provided 00:07:32.620 Per-Namespace SMART Log: Yes 00:07:32.620 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.620 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.620 Command Effects Log Page: Supported 00:07:32.620 Get Log Page Extended Data: Supported 00:07:32.620 Telemetry Log Pages: Not Supported 00:07:32.620 Persistent Event Log Pages: Not Supported 00:07:32.620 Supported Log Pages Log Page: May Support 00:07:32.620 Commands Supported & Effects Log Page: Not Supported 00:07:32.620 Feature Identifiers & Effects Log Page:May Support 00:07:32.620 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.620 Data Area 4 for Telemetry Log: Not Supported 00:07:32.620 Error Log Page Entries Supported: 1 00:07:32.620 Keep Alive: Not Supported 00:07:32.620 00:07:32.620 NVM Command Set Attributes 00:07:32.620 ========================== 00:07:32.620 Submission Queue Entry Size 00:07:32.620 Max: 64 00:07:32.620 Min: 64 00:07:32.620 Completion Queue Entry Size 00:07:32.620 Max: 16 00:07:32.620 Min: 16 00:07:32.620 Number of Namespaces: 256 00:07:32.620 Compare Command: Supported 00:07:32.620 Write Uncorrectable Command: Not Supported 00:07:32.620 Dataset Management Command: Supported 00:07:32.620 Write Zeroes Command: Supported 00:07:32.620 Set Features Save Field: Supported 00:07:32.620 Reservations: Not Supported 00:07:32.620 Timestamp: Supported 00:07:32.620 Copy: Supported 00:07:32.620 Volatile Write Cache: Present 00:07:32.620 Atomic Write Unit (Normal): 1 00:07:32.620 Atomic Write Unit (PFail): 1 00:07:32.620 Atomic Compare & Write Unit: 1 00:07:32.620 Fused Compare & Write: Not Supported 00:07:32.620 Scatter-Gather List 00:07:32.620 SGL Command Set: Supported 00:07:32.620 SGL Keyed: Not Supported 00:07:32.620 SGL Bit Bucket Descriptor: Not Supported 00:07:32.620 SGL Metadata Pointer: Not Supported 00:07:32.620 Oversized SGL: Not Supported 00:07:32.620 SGL Metadata Address: Not Supported 00:07:32.620 SGL Offset: Not Supported 00:07:32.620 Transport SGL Data Block: Not Supported 00:07:32.620 Replay Protected Memory Block: Not Supported 00:07:32.620 00:07:32.620 Firmware Slot Information 00:07:32.620 ========================= 00:07:32.620 Active slot: 1 00:07:32.620 Slot 1 Firmware Revision: 1.0 00:07:32.620 00:07:32.620 00:07:32.620 Commands Supported and Effects 00:07:32.620 ============================== 00:07:32.620 Admin Commands 00:07:32.620 -------------- 
00:07:32.620 Delete I/O Submission Queue (00h): Supported 00:07:32.620 Create I/O Submission Queue (01h): Supported 00:07:32.620 Get Log Page (02h): Supported 00:07:32.620 Delete I/O Completion Queue (04h): Supported 00:07:32.620 Create I/O Completion Queue (05h): Supported 00:07:32.620 Identify (06h): Supported 00:07:32.620 Abort (08h): Supported 00:07:32.620 Set Features (09h): Supported 00:07:32.620 Get Features (0Ah): Supported 00:07:32.620 Asynchronous Event Request (0Ch): Supported 00:07:32.620 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.620 Directive Send (19h): Supported 00:07:32.620 Directive Receive (1Ah): Supported 00:07:32.620 Virtualization Management (1Ch): Supported 00:07:32.620 Doorbell Buffer Config (7Ch): Supported 00:07:32.620 Format NVM (80h): Supported LBA-Change 00:07:32.620 I/O Commands 00:07:32.620 ------------ 00:07:32.620 Flush (00h): Supported LBA-Change 00:07:32.620 Write (01h): Supported LBA-Change 00:07:32.620 Read (02h): Supported 00:07:32.620 Compare (05h): Supported 00:07:32.620 Write Zeroes (08h): Supported LBA-Change 00:07:32.620 Dataset Management (09h): Supported LBA-Change 00:07:32.620 Unknown (0Ch): Supported 00:07:32.620 Unknown (12h): Supported 00:07:32.620 Copy (19h): Supported LBA-Change 00:07:32.620 Unknown (1Dh): Supported LBA-Change 00:07:32.620 00:07:32.620 Error Log 00:07:32.620 ========= 00:07:32.620 00:07:32.620 Arbitration 00:07:32.620 =========== 00:07:32.620 Arbitration Burst: no limit 00:07:32.620 00:07:32.620 Power Management 00:07:32.620 ================ 00:07:32.620 Number of Power States: 1 00:07:32.620 Current Power State: Power State #0 00:07:32.620 Power State #0: 00:07:32.620 Max Power: 25.00 W 00:07:32.620 Non-Operational State: Operational 00:07:32.620 Entry Latency: 16 microseconds 00:07:32.620 Exit Latency: 4 microseconds 00:07:32.620 Relative Read Throughput: 0 00:07:32.620 Relative Read Latency: 0 00:07:32.620 Relative Write Throughput: 0 00:07:32.620 Relative Write Latency: 0 00:07:32.620 Idle Power: Not Reported 00:07:32.620 Active Power: Not Reported 00:07:32.620 Non-Operational Permissive Mode: Not Supported 00:07:32.620 00:07:32.620 Health Information 00:07:32.620 ================== 00:07:32.620 Critical Warnings: 00:07:32.620 Available Spare Space: OK 00:07:32.620 Temperature: OK 00:07:32.620 Device Reliability: OK 00:07:32.620 Read Only: No 00:07:32.620 Volatile Memory Backup: OK 00:07:32.620 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.620 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.620 Available Spare: 0% 00:07:32.620 Available Spare Threshold: 0% 00:07:32.620 Life Percentage Used: 0% 00:07:32.620 Data Units Read: 748 00:07:32.620 Data Units Written: 677 00:07:32.620 Host Read Commands: 42406 00:07:32.620 Host Write Commands: 42192 00:07:32.620 Controller Busy Time: 0 minutes 00:07:32.620 Power Cycles: 0 00:07:32.620 Power On Hours: 0 hours 00:07:32.620 Unsafe Shutdowns: 0 00:07:32.620 Unrecoverable Media Errors: 0 00:07:32.620 Lifetime Error Log Entries: 0 00:07:32.620 Warning Temperature Time: 0 minutes 00:07:32.620 Critical Temperature Time: 0 minutes 00:07:32.620 00:07:32.620 Number of Queues 00:07:32.620 ================ 00:07:32.620 Number of I/O Submission Queues: 64 00:07:32.620 Number of I/O Completion Queues: 64 00:07:32.620 00:07:32.620 ZNS Specific Controller Data 00:07:32.620 ============================ 00:07:32.620 Zone Append Size Limit: 0 00:07:32.621 00:07:32.621 00:07:32.621 Active Namespaces 00:07:32.621 ================= 00:07:32.621 Namespace ID:1 
00:07:32.621 Error Recovery Timeout: Unlimited 00:07:32.621 Command Set Identifier: NVM (00h) 00:07:32.621 Deallocate: Supported 00:07:32.621 Deallocated/Unwritten Error: Supported 00:07:32.621 Deallocated Read Value: All 0x00 00:07:32.621 Deallocate in Write Zeroes: Not Supported 00:07:32.621 Deallocated Guard Field: 0xFFFF 00:07:32.621 Flush: Supported 00:07:32.621 Reservation: Not Supported 00:07:32.621 Metadata Transferred as: Separate Metadata Buffer 00:07:32.621 Namespace Sharing Capabilities: Private 00:07:32.621 Size (in LBAs): 1548666 (5GiB) 00:07:32.621 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.621 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.621 Thin Provisioning: Not Supported 00:07:32.621 Per-NS Atomic Units: No 00:07:32.621 Maximum Single Source Range Length: 128 00:07:32.621 Maximum Copy Length: 128 00:07:32.621 Maximum Source Range Count: 128 00:07:32.621 NGUID/EUI64 Never Reused: No 00:07:32.621 Namespace Write Protected: No 00:07:32.621 Number of LBA Formats: 8 00:07:32.621 Current LBA Format: LBA Format #07 00:07:32.621 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.621 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.621 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.621 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.621 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.621 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.621 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.621 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.621 00:07:32.621 NVM Specific Namespace Data 00:07:32.621 =========================== 00:07:32.621 Logical Block Storage Tag Mask: 0 00:07:32.621 Protection Information Capabilities: 00:07:32.621 16b Guard Protection Information Storage Tag Support: No 00:07:32.621 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.621 Storage Tag Check Read Support: No 00:07:32.621 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.621 ===================================================== 00:07:32.621 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.621 ===================================================== 00:07:32.621 Controller Capabilities/Features 00:07:32.621 ================================ 00:07:32.621 Vendor ID: 1b36 00:07:32.621 Subsystem Vendor ID: 1af4 00:07:32.621 Serial Number: 12341 00:07:32.621 Model Number: QEMU NVMe Ctrl 00:07:32.621 Firmware Version: 8.0.0 00:07:32.621 Recommended Arb Burst: 6 00:07:32.621 IEEE OUI Identifier: 00 54 52 00:07:32.621 Multi-path I/O 00:07:32.621 May have multiple subsystem ports: No 00:07:32.621 May have multiple controllers: No 00:07:32.621 Associated with SR-IOV VF: No 00:07:32.621 Max Data Transfer Size: 524288 00:07:32.621 Max 
Number of Namespaces: 256 00:07:32.621 Max Number of I/O Queues: 64 00:07:32.621 NVMe Specification Version (VS): 1.4 00:07:32.621 NVMe Specification Version (Identify): 1.4 00:07:32.621 Maximum Queue Entries: 2048 00:07:32.621 Contiguous Queues Required: Yes 00:07:32.621 Arbitration Mechanisms Supported 00:07:32.621 Weighted Round Robin: Not Supported 00:07:32.621 Vendor Specific: Not Supported 00:07:32.621 Reset Timeout: 7500 ms 00:07:32.621 Doorbell Stride: 4 bytes 00:07:32.621 NVM Subsystem Reset: Not Supported 00:07:32.621 Command Sets Supported 00:07:32.621 NVM Command Set: Supported 00:07:32.621 Boot Partition: Not Supported 00:07:32.621 Memory Page Size Minimum: 4096 bytes 00:07:32.621 Memory Page Size Maximum: 65536 bytes 00:07:32.621 Persistent Memory Region: Not Supported 00:07:32.621 Optional Asynchronous Events Supported 00:07:32.621 Namespace Attribute Notices: Supported 00:07:32.621 Firmware Activation Notices: Not Supported 00:07:32.621 ANA Change Notices: Not Supported 00:07:32.621 PLE Aggregate Log Change Notices: Not Supported 00:07:32.621 LBA Status Info Alert Notices: Not Supported 00:07:32.621 EGE Aggregate Log Change Notices: Not Supported 00:07:32.621 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.621 Zone Descriptor Change Notices: Not Supported 00:07:32.621 Discovery Log Change Notices: Not Supported 00:07:32.621 Controller Attributes 00:07:32.621 128-bit Host Identifier: Not Supported 00:07:32.621 Non-Operational Permissive Mode: Not Supported 00:07:32.621 NVM Sets: Not Supported 00:07:32.621 Read Recovery Levels: Not Supported 00:07:32.621 Endurance Groups: Not Supported 00:07:32.621 Predictable Latency Mode: Not Supported 00:07:32.621 Traffic Based Keep ALive: Not Supported 00:07:32.621 Namespace Granularity: Not Supported 00:07:32.621 SQ Associations: Not Supported 00:07:32.621 UUID List: Not Supported 00:07:32.621 Multi-Domain Subsystem: Not Supported 00:07:32.621 Fixed Capacity Management: Not Supported 00:07:32.621 Variable Capacity Management: Not Supported 00:07:32.621 Delete Endurance Group: Not Supported 00:07:32.621 Delete NVM Set: Not Supported 00:07:32.621 Extended LBA Formats Supported: Supported 00:07:32.621 Flexible Data Placement Supported: Not Supported 00:07:32.621 00:07:32.621 Controller Memory Buffer Support 00:07:32.621 ================================ 00:07:32.621 Supported: No 00:07:32.621 00:07:32.621 Persistent Memory Region Support 00:07:32.621 ================================ 00:07:32.621 Supported: No 00:07:32.621 00:07:32.621 Admin Command Set Attributes 00:07:32.621 ============================ 00:07:32.621 Security Send/Receive: Not Supported 00:07:32.621 Format NVM: Supported 00:07:32.621 Firmware Activate/Download: Not Supported 00:07:32.621 Namespace Management: Supported 00:07:32.621 Device Self-Test: Not Supported 00:07:32.621 Directives: Supported 00:07:32.621 NVMe-MI: Not Supported 00:07:32.621 Virtualization Management: Not Supported 00:07:32.621 Doorbell Buffer Config: Supported 00:07:32.621 Get LBA Status Capability: Not Supported 00:07:32.621 Command & Feature Lockdown Capability: Not Supported 00:07:32.621 Abort Command Limit: 4 00:07:32.621 Async Event Request Limit: 4 00:07:32.621 Number of Firmware Slots: N/A 00:07:32.621 Firmware Slot 1 Read-Only: N/A 00:07:32.621 Firmware Activation Without Reset: N/A 00:07:32.621 Multiple Update Detection Support: N/A 00:07:32.621 Firmware Update Granularity: No Information Provided 00:07:32.621 Per-Namespace SMART Log: Yes 00:07:32.621 Asymmetric Namespace Access Log 
Page: Not Supported 00:07:32.621 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:32.621 Command Effects Log Page: Supported 00:07:32.621 Get Log Page Extended Data: Supported 00:07:32.621 Telemetry Log Pages: Not Supported 00:07:32.621 Persistent Event Log Pages: Not Supported 00:07:32.621 Supported Log Pages Log Page: May Support 00:07:32.621 Commands Supported & Effects Log Page: Not Supported 00:07:32.621 Feature Identifiers & Effects Log Page:May Support 00:07:32.621 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.621 Data Area 4 for Telemetry Log: Not Supported 00:07:32.621 Error Log Page Entries Supported: 1 00:07:32.621 Keep Alive: Not Supported 00:07:32.621 00:07:32.621 NVM Command Set Attributes 00:07:32.621 ========================== 00:07:32.621 Submission Queue Entry Size 00:07:32.621 Max: 64 00:07:32.621 Min: 64 00:07:32.621 Completion Queue Entry Size 00:07:32.621 Max: 16 00:07:32.621 Min: 16 00:07:32.621 Number of Namespaces: 256 00:07:32.621 Compare Command: Supported 00:07:32.621 Write Uncorrectable Command: Not Supported 00:07:32.621 Dataset Management Command: Supported 00:07:32.621 Write Zeroes Command: Supported 00:07:32.621 Set Features Save Field: Supported 00:07:32.621 Reservations: Not Supported 00:07:32.621 Timestamp: Supported 00:07:32.621 Copy: Supported 00:07:32.621 Volatile Write Cache: Present 00:07:32.621 Atomic Write Unit (Normal): 1 00:07:32.621 Atomic Write Unit (PFail): 1 00:07:32.621 Atomic Compare & Write Unit: 1 00:07:32.621 Fused Compare & Write: Not Supported 00:07:32.621 Scatter-Gather List 00:07:32.621 SGL Command Set: Supported 00:07:32.621 SGL Keyed: Not Supported 00:07:32.621 SGL Bit Bucket Descriptor: Not Supported 00:07:32.621 SGL Metadata Pointer: Not Supported 00:07:32.621 Oversized SGL: Not Supported 00:07:32.621 SGL Metadata Address: Not Supported 00:07:32.621 SGL Offset: Not Supported 00:07:32.621 Transport SGL Data Block: Not Supported 00:07:32.621 Replay Protected Memory Block: Not Supported 00:07:32.621 00:07:32.621 Firmware Slot Information 00:07:32.621 ========================= 00:07:32.621 Active slot: 1 00:07:32.621 Slot 1 Firmware Revision: 1.0 00:07:32.621 00:07:32.621 00:07:32.621 Commands Supported and Effects 00:07:32.621 ============================== 00:07:32.621 Admin Commands 00:07:32.622 -------------- 00:07:32.622 Delete I/O Submission Queue (00h): Supported 00:07:32.622 Create I/O Submission Queue (01h): Supported 00:07:32.622 Get Log Page (02h): Supported 00:07:32.622 Delete I/O Completion Queue (04h): Supported 00:07:32.622 Create I/O Completion Queue (05h): Supported 00:07:32.622 Identify (06h): Supported 00:07:32.622 Abort (08h): Supported 00:07:32.622 Set Features (09h): Supported 00:07:32.622 Get Features (0Ah): Supported 00:07:32.622 Asynchronous Event Request (0Ch): Supported 00:07:32.622 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.622 Directive Send (19h): Supported 00:07:32.622 Directive Receive (1Ah): Supported 00:07:32.622 Virtualization Management (1Ch): Supported 00:07:32.622 Doorbell Buffer Config (7Ch): Supported 00:07:32.622 Format NVM (80h): Supported LBA-Change 00:07:32.622 I/O Commands 00:07:32.622 ------------ 00:07:32.622 Flush (00h): Supported LBA-Change 00:07:32.622 Write (01h): Supported LBA-Change 00:07:32.622 Read (02h): Supported 00:07:32.622 Compare (05h): Supported 00:07:32.622 Write Zeroes (08h): Supported LBA-Change 00:07:32.622 Dataset Management (09h): Supported LBA-Change 00:07:32.622 Unknown (0Ch): Supported 00:07:32.622 Unknown (12h): Supported 
00:07:32.622 Copy (19h): Supported LBA-Change 00:07:32.622 Unknown (1Dh): Supported LBA-Change 00:07:32.622 00:07:32.622 Error Log 00:07:32.622 ========= 00:07:32.622 00:07:32.622 Arbitration 00:07:32.622 =========== 00:07:32.622 Arbitration Burst: no limit 00:07:32.622 00:07:32.622 Power Management 00:07:32.622 ================ 00:07:32.622 Number of Power States: 1 00:07:32.622 Current Power State: Power State #0 00:07:32.622 Power State #0: 00:07:32.622 Max Power: 25.00 W 00:07:32.622 Non-Operational State: Operational 00:07:32.622 Entry Latency: 16 microseconds 00:07:32.622 Exit Latency: 4 microseconds 00:07:32.622 Relative Read Throughput: 0 00:07:32.622 Relative Read Latency: 0 00:07:32.622 Relative Write Throughput: 0 00:07:32.622 Relative Write Latency: 0 00:07:32.622 Idle Power: Not Reported 00:07:32.622 Active Power: Not Reported 00:07:32.622 Non-Operational Permissive Mode: Not Supported 00:07:32.622 00:07:32.622 Health Information 00:07:32.622 ================== 00:07:32.622 Critical Warnings: 00:07:32.622 Available Spare Space: OK 00:07:32.622 Temperature: OK 00:07:32.622 Device Reliability: OK 00:07:32.622 Read Only: No 00:07:32.622 Volatile Memory Backup: OK 00:07:32.622 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.622 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.622 Available Spare: 0% 00:07:32.622 Available Spare Threshold: 0% 00:07:32.622 Life Percentage Used: 0% 00:07:32.622 Data Units Read: 1157 00:07:32.622 Data Units Written: 1024 00:07:32.622 Host Read Commands: 62913 00:07:32.622 Host Write Commands: 61709 00:07:32.622 Controller Busy Time: 0 minutes 00:07:32.622 Power Cycles: 0 00:07:32.622 Power On Hours: 0 hours 00:07:32.622 Unsafe Shutdowns: 0 00:07:32.622 Unrecoverable Media Errors: 0 00:07:32.622 Lifetime Error Log Entries: 0 00:07:32.622 Warning Temperature Time: 0 minutes 00:07:32.622 Critical Temperature Time: 0 minutes 00:07:32.622 00:07:32.622 Number of Queues 00:07:32.622 ================ 00:07:32.622 Number of I/O Submission Queues: 64 00:07:32.622 Number of I/O Completion Queues: 64 00:07:32.622 00:07:32.622 ZNS Specific Controller Data 00:07:32.622 ============================ 00:07:32.622 Zone Append Size Limit: 0 00:07:32.622 00:07:32.622 00:07:32.622 Active Namespaces 00:07:32.622 ================= 00:07:32.622 Namespace ID:1 00:07:32.622 Error Recovery Timeout: Unlimited 00:07:32.622 Command Set Identifier: NVM (00h) 00:07:32.622 Deallocate: Supported 00:07:32.622 Deallocated/Unwritten Error: Supported 00:07:32.622 Deallocated Read Value: All 0x00 00:07:32.622 Deallocate in Write Zeroes: Not Supported 00:07:32.622 Deallocated Guard Field: 0xFFFF 00:07:32.622 Flush: Supported 00:07:32.622 Reservation: Not Supported 00:07:32.622 Namespace Sharing Capabilities: Private 00:07:32.622 Size (in LBAs): 1310720 (5GiB) 00:07:32.622 Capacity (in LBAs): 1310720 (5GiB) 00:07:32.622 Utilization (in LBAs): 1310720 (5GiB) 00:07:32.622 Thin Provisioning: Not Supported 00:07:32.622 Per-NS Atomic Units: No 00:07:32.622 Maximum Single Source Range Length: 128 00:07:32.622 Maximum Copy Length: 128 00:07:32.622 Maximum Source Range Count: 128 00:07:32.622 NGUID/EUI64 Never Reused: No 00:07:32.622 Namespace Write Protected: No 00:07:32.622 Number of LBA Formats: 8 00:07:32.622 Current LBA Format: LBA Format #04 00:07:32.622 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.622 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.622 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.622 LBA Format #03: Data Size: 512 Metadata Size: 
64 00:07:32.622 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.622 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.622 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.622 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.622 00:07:32.622 NVM Specific Namespace Data 00:07:32.622 =========================== 00:07:32.622 Logical Block Storage Tag Mask: 0 00:07:32.622 Protection Information Capabilities: 00:07:32.622 16b Guard Protection Information Storage Tag Support: No 00:07:32.622 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.622 Storage Tag Check Read Support: No 00:07:32.622 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.622 ===================================================== 00:07:32.622 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:32.622 ===================================================== 00:07:32.622 Controller Capabilities/Features 00:07:32.622 ================================ 00:07:32.622 Vendor ID: 1b36 00:07:32.622 Subsystem Vendor ID: 1af4 00:07:32.622 Serial Number: 12343 00:07:32.622 Model Number: QEMU NVMe Ctrl 00:07:32.622 Firmware Version: 8.0.0 00:07:32.622 Recommended Arb Burst: 6 00:07:32.622 IEEE OUI Identifier: 00 54 52 00:07:32.622 Multi-path I/O 00:07:32.622 May have multiple subsystem ports: No 00:07:32.622 May have multiple controllers: Yes 00:07:32.622 Associated with SR-IOV VF: No 00:07:32.622 Max Data Transfer Size: 524288 00:07:32.622 Max Number of Namespaces: 256 00:07:32.622 Max Number of I/O Queues: 64 00:07:32.622 NVMe Specification Version (VS): 1.4 00:07:32.622 NVMe Specification Version (Identify): 1.4 00:07:32.622 Maximum Queue Entries: 2048 00:07:32.622 Contiguous Queues Required: Yes 00:07:32.622 Arbitration Mechanisms Supported 00:07:32.622 Weighted Round Robin: Not Supported 00:07:32.622 Vendor Specific: Not Supported 00:07:32.622 Reset Timeout: 7500 ms 00:07:32.622 Doorbell Stride: 4 bytes 00:07:32.622 NVM Subsystem Reset: Not Supported 00:07:32.622 Command Sets Supported 00:07:32.622 NVM Command Set: Supported 00:07:32.622 Boot Partition: Not Supported 00:07:32.622 Memory Page Size Minimum: 4096 bytes 00:07:32.622 Memory Page Size Maximum: 65536 bytes 00:07:32.622 Persistent Memory Region: Not Supported 00:07:32.622 Optional Asynchronous Events Supported 00:07:32.622 Namespace Attribute Notices: Supported 00:07:32.622 Firmware Activation Notices: Not Supported 00:07:32.622 ANA Change Notices: Not Supported 00:07:32.622 PLE Aggregate Log Change Notices: Not Supported 00:07:32.622 LBA Status Info Alert Notices: Not Supported 00:07:32.622 EGE Aggregate Log Change Notices: Not Supported 00:07:32.622 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.622 Zone Descriptor Change Notices: Not 
Supported 00:07:32.622 Discovery Log Change Notices: Not Supported 00:07:32.622 Controller Attributes 00:07:32.622 128-bit Host Identifier: Not Supported 00:07:32.622 Non-Operational Permissive Mode: Not Supported 00:07:32.622 NVM Sets: Not Supported 00:07:32.622 Read Recovery Levels: Not Supported 00:07:32.622 Endurance Groups: Supported 00:07:32.622 Predictable Latency Mode: Not Supported 00:07:32.622 Traffic Based Keep ALive: Not Supported 00:07:32.622 Namespace Granularity: Not Supported 00:07:32.622 SQ Associations: Not Supported 00:07:32.622 UUID List: Not Supported 00:07:32.622 Multi-Domain Subsystem: Not Supported 00:07:32.622 Fixed Capacity Management: Not Supported 00:07:32.622 Variable Capacity Management: Not Supported 00:07:32.622 Delete Endurance Group: Not Supported 00:07:32.622 Delete NVM Set: Not Supported 00:07:32.622 Extended LBA Formats Supported: Supported 00:07:32.622 Flexible Data Placement Supported: Supported 00:07:32.622 00:07:32.623 Controller Memory Buffer Support 00:07:32.623 ================================ 00:07:32.623 Supported: No 00:07:32.623 00:07:32.623 Persistent Memory Region Support 00:07:32.623 ================================ 00:07:32.623 Supported: No 00:07:32.623 00:07:32.623 Admin Command Set Attributes 00:07:32.623 ============================ 00:07:32.623 Security Send/Receive: Not Supported 00:07:32.623 Format NVM: Supported 00:07:32.623 Firmware Activate/Download: Not Supported 00:07:32.623 Namespace Management: Supported 00:07:32.623 Device Self-Test: Not Supported 00:07:32.623 Directives: Supported 00:07:32.623 NVMe-MI: Not Supported 00:07:32.623 Virtualization Management: Not Supported 00:07:32.623 Doorbell Buffer Config: Supported 00:07:32.623 Get LBA Status Capability: Not Supported 00:07:32.623 Command & Feature Lockdown Capability: Not Supported 00:07:32.623 Abort Command Limit: 4 00:07:32.623 Async Event Request Limit: 4 00:07:32.623 Number of Firmware Slots: N/A 00:07:32.623 Firmware Slot 1 Read-Only: N/A 00:07:32.623 Firmware Activation Without Reset: N/A 00:07:32.623 Multiple Update Detection Support: N/A 00:07:32.623 Firmware Update Granularity: No Information Provided 00:07:32.623 Per-Namespace SMART Log: Yes 00:07:32.623 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.623 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:32.623 Command Effects Log Page: Supported 00:07:32.623 Get Log Page Extended Data: Supported 00:07:32.623 Telemetry Log Pages: Not Supported 00:07:32.623 Persistent Event Log Pages: Not Supported 00:07:32.623 Supported Log Pages Log Page: May Support 00:07:32.623 Commands Supported & Effects Log Page: Not Supported 00:07:32.623 Feature Identifiers & Effects Log Page:May Support 00:07:32.623 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.623 Data Area 4 for Telemetry Log: Not Supported 00:07:32.623 Error Log Page Entries Supported: 1 00:07:32.623 Keep Alive: Not Supported 00:07:32.623 00:07:32.623 NVM Command Set Attributes 00:07:32.623 ========================== 00:07:32.623 Submission Queue Entry Size 00:07:32.623 Max: 64 00:07:32.623 Min: 64 00:07:32.623 Completion Queue Entry Size 00:07:32.623 Max: 16 00:07:32.623 Min: 16 00:07:32.623 Number of Namespaces: 256 00:07:32.623 Compare Command: Supported 00:07:32.623 Write Uncorrectable Command: Not Supported 00:07:32.623 Dataset Management Command: Supported 00:07:32.623 Write Zeroes Command: Supported 00:07:32.623 Set Features Save Field: Supported 00:07:32.623 Reservations: Not Supported 00:07:32.623 Timestamp: Supported 
00:07:32.623 Copy: Supported 00:07:32.623 Volatile Write Cache: Present 00:07:32.623 Atomic Write Unit (Normal): 1 00:07:32.623 Atomic Write Unit (PFail): 1 00:07:32.623 Atomic Compare & Write Unit: 1 00:07:32.623 Fused Compare & Write: Not Supported 00:07:32.623 Scatter-Gather List 00:07:32.623 SGL Command Set: Supported 00:07:32.623 SGL Keyed: Not Supported 00:07:32.623 SGL Bit Bucket Descriptor: Not Supported 00:07:32.623 SGL Metadata Pointer: Not Supported 00:07:32.623 Oversized SGL: Not Supported 00:07:32.623 SGL Metadata Address: Not Supported 00:07:32.623 SGL Offset: Not Supported 00:07:32.623 Transport SGL Data Block: Not Supported 00:07:32.623 Replay Protected Memory Block: Not Supported 00:07:32.623 00:07:32.623 Firmware Slot Information 00:07:32.623 ========================= 00:07:32.623 Active slot: 1 00:07:32.623 Slot 1 Firmware Revision: 1.0 00:07:32.623 00:07:32.623 00:07:32.623 Commands Supported and Effects 00:07:32.623 ============================== 00:07:32.623 Admin Commands 00:07:32.623 -------------- 00:07:32.623 Delete I/O Submission Queue (00h): Supported 00:07:32.623 Create I/O Submission Queue (01h): Supported 00:07:32.623 Get Log Page (02h): Supported 00:07:32.623 Delete I/O Completion Queue (04h): Supported 00:07:32.623 Create I/O Completion Queue (05h): Supported 00:07:32.623 Identify (06h): Supported 00:07:32.623 Abort (08h): Supported 00:07:32.623 Set Features (09h): Supported 00:07:32.623 Get Features (0Ah): Supported 00:07:32.623 Asynchronous Event Request (0Ch): Supported 00:07:32.623 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.623 Directive Send (19h): Supported 00:07:32.623 Directive Receive (1Ah): Supported 00:07:32.623 Virtualization Management (1Ch): Supported 00:07:32.623 Doorbell Buffer Config (7Ch): Supported 00:07:32.623 Format NVM (80h): Supported LBA-Change 00:07:32.623 I/O Commands 00:07:32.623 ------------ 00:07:32.623 Flush (00h): Supported LBA-Change 00:07:32.623 Write (01h): Supported LBA-Change 00:07:32.623 Read (02h): Supported 00:07:32.623 Compare (05h): Supported 00:07:32.623 Write Zeroes (08h): Supported LBA-Change 00:07:32.623 Dataset Management (09h): Supported LBA-Change 00:07:32.623 Unknown (0Ch): Supported 00:07:32.623 Unknown (12h): Supported 00:07:32.623 Copy (19h): Supported LBA-Change 00:07:32.623 Unknown (1Dh): Supported LBA-Change 00:07:32.623 00:07:32.623 Error Log 00:07:32.623 ========= 00:07:32.623 00:07:32.623 Arbitration 00:07:32.623 =========== 00:07:32.623 Arbitration Burst: no limit 00:07:32.623 00:07:32.623 Power Management 00:07:32.623 ================ 00:07:32.623 Number of Power States: 1 00:07:32.623 Current Power State: Power State #0 00:07:32.623 Power State #0: 00:07:32.623 Max Power: 25.00 W 00:07:32.623 Non-Operational State: Operational 00:07:32.623 Entry Latency: 16 microseconds 00:07:32.623 Exit Latency: 4 microseconds 00:07:32.623 Relative Read Throughput: 0 00:07:32.623 Relative Read Latency: 0 00:07:32.623 Relative Write Throughput: 0 00:07:32.623 Relative Write Latency: 0 00:07:32.623 Idle Power: Not Reported 00:07:32.623 Active Power: Not Reported 00:07:32.623 Non-Operational Permissive Mode: Not Supported 00:07:32.623 00:07:32.623 Health Information 00:07:32.623 ================== 00:07:32.623 Critical Warnings: 00:07:32.623 Available Spare Space: OK 00:07:32.623 Temperature: OK 00:07:32.623 Device Reliability: OK 00:07:32.623 Read Only: No 00:07:32.623 Volatile Memory Backup: OK 00:07:32.623 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.623 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:07:32.623 Available Spare: 0% 00:07:32.623 Available Spare Threshold: 0% 00:07:32.623 Life Percentage Used: 0% 00:07:32.623 Data Units Read: 901 00:07:32.623 Data Units Written: 830 00:07:32.623 Host Read Commands: 44017 00:07:32.623 Host Write Commands: 43440 00:07:32.623 Controller Busy Time: 0 minutes 00:07:32.623 Power Cycles: 0 00:07:32.623 Power On Hours: 0 hours 00:07:32.623 Unsafe Shutdowns: 0 00:07:32.623 Unrecoverable Media Errors: 0 00:07:32.623 Lifetime Error Log Entries: 0 00:07:32.623 Warning Temperature Time: 0 minutes 00:07:32.623 Critical Temperature Time: 0 minutes 00:07:32.623 00:07:32.623 Number of Queues 00:07:32.623 ================ 00:07:32.623 Number of I/O Submission Queues: 64 00:07:32.623 Number of I/O Completion Queues: 64 00:07:32.623 00:07:32.623 ZNS Specific Controller Data 00:07:32.623 ============================ 00:07:32.623 Zone Append Size Limit: 0 00:07:32.623 00:07:32.623 00:07:32.623 Active Namespaces 00:07:32.623 ================= 00:07:32.623 Namespace ID:1 00:07:32.623 Error Recovery Timeout: Unlimited 00:07:32.623 Command Set Identifier: NVM (00h) 00:07:32.623 Deallocate: Supported 00:07:32.623 Deallocated/Unwritten Error: Supported 00:07:32.623 Deallocated Read Value: All 0x00 00:07:32.623 Deallocate in Write Zeroes: Not Supported 00:07:32.623 Deallocated Guard Field: 0xFFFF 00:07:32.623 Flush: Supported 00:07:32.623 Reservation: Not Supported 00:07:32.623 Namespace Sharing Capabilities: Multiple Controllers 00:07:32.623 Size (in LBAs): 262144 (1GiB) 00:07:32.623 Capacity (in LBAs): 262144 (1GiB) 00:07:32.623 Utilization (in LBAs): 262144 (1GiB) 00:07:32.623 Thin Provisioning: Not Supported 00:07:32.623 Per-NS Atomic Units: No 00:07:32.623 Maximum Single Source Range Length: 128 00:07:32.623 Maximum Copy Length: 128 00:07:32.623 Maximum Source Range Count: 128 00:07:32.623 NGUID/EUI64 Never Reused: No 00:07:32.623 Namespace Write Protected: No 00:07:32.623 Endurance group ID: 1 00:07:32.623 Number of LBA Formats: 8 00:07:32.623 Current LBA Format: LBA Format #04 00:07:32.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.623 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.623 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.623 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.623 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.623 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.623 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.623 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.623 00:07:32.623 Get Feature FDP: 00:07:32.623 ================ 00:07:32.623 Enabled: Yes 00:07:32.623 FDP configuration index: 0 00:07:32.623 00:07:32.623 FDP configurations log page 00:07:32.623 =========================== 00:07:32.623 Number of FDP configurations: 1 00:07:32.623 Version: 0 00:07:32.623 Size: 112 00:07:32.623 FDP Configuration Descriptor: 0 00:07:32.623 Descriptor Size: 96 00:07:32.623 Reclaim Group Identifier format: 2 00:07:32.624 FDP Volatile Write Cache: Not Present 00:07:32.624 FDP Configuration: Valid 00:07:32.624 Vendor Specific Size: 0 00:07:32.624 Number of Reclaim Groups: 2 00:07:32.624 Number of Reclaim Unit Handles: 8 00:07:32.624 Max Placement Identifiers: 128 00:07:32.624 Number of Namespaces Supported: 256 00:07:32.624 Reclaim Unit Nominal Size: 6000000 bytes 00:07:32.624 Estimated Reclaim Unit Time Limit: Not Reported 00:07:32.624 RUH Desc #000: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #001: RUH Type: Initially Isolated
00:07:32.624 RUH Desc #002: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #003: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #004: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #005: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #006: RUH Type: Initially Isolated 00:07:32.624 RUH Desc #007: RUH Type: Initially Isolated 00:07:32.624 00:07:32.624 FDP reclaim unit handle usage log page 00:07:32.624 ====================================== 00:07:32.624 Number of Reclaim Unit Handles: 8 00:07:32.624 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:32.624 RUH Usage Desc #001: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #002: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #003: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #004: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #005: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #006: RUH Attributes: Unused 00:07:32.624 RUH Usage Desc #007: RUH Attributes: Unused 00:07:32.624 00:07:32.624 FDP statistics log page 00:07:32.624 ======================= 00:07:32.624 Host bytes with metadata written: 522035200 00:07:32.624 [2024-10-08 09:13:24.118869] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63211 terminated unexpected 00:07:32.624 [2024-10-08 09:13:24.119445] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63211 terminated unexpected 00:07:32.624 [2024-10-08 09:13:24.120371] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63211 terminated unexpected 00:07:32.624 Media bytes with metadata written: 522092544 00:07:32.624 Media bytes erased: 0 00:07:32.624 00:07:32.624 FDP events log page 00:07:32.624 =================== 00:07:32.624 Number of FDP events: 0 00:07:32.624 00:07:32.624 NVM Specific Namespace Data 00:07:32.624 =========================== 00:07:32.624 Logical Block Storage Tag Mask: 0 00:07:32.624 Protection Information Capabilities: 00:07:32.624 16b Guard Protection Information Storage Tag Support: No 00:07:32.624 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.624 Storage Tag Check Read Support: No 00:07:32.624 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.624 ===================================================== 00:07:32.624 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:32.624 ===================================================== 00:07:32.624 Controller Capabilities/Features 00:07:32.624 ================================ 00:07:32.624 Vendor ID: 1b36 00:07:32.624 Subsystem Vendor ID: 1af4 00:07:32.624 Serial Number: 12342 00:07:32.624 Model Number: QEMU NVMe Ctrl 00:07:32.624 Firmware Version: 8.0.0 00:07:32.624 Recommended Arb Burst: 6 00:07:32.624 IEEE OUI
Identifier: 00 54 52 00:07:32.624 Multi-path I/O 00:07:32.624 May have multiple subsystem ports: No 00:07:32.624 May have multiple controllers: No 00:07:32.624 Associated with SR-IOV VF: No 00:07:32.624 Max Data Transfer Size: 524288 00:07:32.624 Max Number of Namespaces: 256 00:07:32.624 Max Number of I/O Queues: 64 00:07:32.624 NVMe Specification Version (VS): 1.4 00:07:32.624 NVMe Specification Version (Identify): 1.4 00:07:32.624 Maximum Queue Entries: 2048 00:07:32.624 Contiguous Queues Required: Yes 00:07:32.624 Arbitration Mechanisms Supported 00:07:32.624 Weighted Round Robin: Not Supported 00:07:32.624 Vendor Specific: Not Supported 00:07:32.624 Reset Timeout: 7500 ms 00:07:32.624 Doorbell Stride: 4 bytes 00:07:32.624 NVM Subsystem Reset: Not Supported 00:07:32.624 Command Sets Supported 00:07:32.624 NVM Command Set: Supported 00:07:32.624 Boot Partition: Not Supported 00:07:32.624 Memory Page Size Minimum: 4096 bytes 00:07:32.624 Memory Page Size Maximum: 65536 bytes 00:07:32.624 Persistent Memory Region: Not Supported 00:07:32.624 Optional Asynchronous Events Supported 00:07:32.624 Namespace Attribute Notices: Supported 00:07:32.624 Firmware Activation Notices: Not Supported 00:07:32.624 ANA Change Notices: Not Supported 00:07:32.624 PLE Aggregate Log Change Notices: Not Supported 00:07:32.624 LBA Status Info Alert Notices: Not Supported 00:07:32.624 EGE Aggregate Log Change Notices: Not Supported 00:07:32.624 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.624 Zone Descriptor Change Notices: Not Supported 00:07:32.624 Discovery Log Change Notices: Not Supported 00:07:32.624 Controller Attributes 00:07:32.624 128-bit Host Identifier: Not Supported 00:07:32.624 Non-Operational Permissive Mode: Not Supported 00:07:32.624 NVM Sets: Not Supported 00:07:32.624 Read Recovery Levels: Not Supported 00:07:32.624 Endurance Groups: Not Supported 00:07:32.624 Predictable Latency Mode: Not Supported 00:07:32.624 Traffic Based Keep Alive: Not Supported 00:07:32.624 Namespace Granularity: Not Supported 00:07:32.624 SQ Associations: Not Supported 00:07:32.624 UUID List: Not Supported 00:07:32.624 Multi-Domain Subsystem: Not Supported 00:07:32.624 Fixed Capacity Management: Not Supported 00:07:32.624 Variable Capacity Management: Not Supported 00:07:32.624 Delete Endurance Group: Not Supported 00:07:32.624 Delete NVM Set: Not Supported 00:07:32.624 Extended LBA Formats Supported: Supported 00:07:32.624 Flexible Data Placement Supported: Not Supported 00:07:32.624 00:07:32.624 Controller Memory Buffer Support 00:07:32.624 ================================ 00:07:32.624 Supported: No 00:07:32.624 00:07:32.624 Persistent Memory Region Support 00:07:32.624 ================================ 00:07:32.624 Supported: No 00:07:32.624 00:07:32.624 Admin Command Set Attributes 00:07:32.624 ============================ 00:07:32.624 Security Send/Receive: Not Supported 00:07:32.624 Format NVM: Supported 00:07:32.624 Firmware Activate/Download: Not Supported 00:07:32.624 Namespace Management: Supported 00:07:32.624 Device Self-Test: Not Supported 00:07:32.624 Directives: Supported 00:07:32.624 NVMe-MI: Not Supported 00:07:32.624 Virtualization Management: Not Supported 00:07:32.624 Doorbell Buffer Config: Supported 00:07:32.624 Get LBA Status Capability: Not Supported 00:07:32.624 Command & Feature Lockdown Capability: Not Supported 00:07:32.624 Abort Command Limit: 4 00:07:32.624 Async Event Request Limit: 4 00:07:32.624 Number of Firmware Slots: N/A 00:07:32.624 Firmware Slot 1 Read-Only: N/A
00:07:32.624 Firmware Activation Without Reset: N/A 00:07:32.624 Multiple Update Detection Support: N/A 00:07:32.624 Firmware Update Granularity: No Information Provided 00:07:32.624 Per-Namespace SMART Log: Yes 00:07:32.624 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.624 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:32.624 Command Effects Log Page: Supported 00:07:32.624 Get Log Page Extended Data: Supported 00:07:32.624 Telemetry Log Pages: Not Supported 00:07:32.624 Persistent Event Log Pages: Not Supported 00:07:32.624 Supported Log Pages Log Page: May Support 00:07:32.624 Commands Supported & Effects Log Page: Not Supported 00:07:32.624 Feature Identifiers & Effects Log Page: May Support 00:07:32.624 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.624 Data Area 4 for Telemetry Log: Not Supported 00:07:32.624 Error Log Page Entries Supported: 1 00:07:32.625 Keep Alive: Not Supported 00:07:32.625 00:07:32.625 NVM Command Set Attributes 00:07:32.625 ========================== 00:07:32.625 Submission Queue Entry Size 00:07:32.625 Max: 64 00:07:32.625 Min: 64 00:07:32.625 Completion Queue Entry Size 00:07:32.625 Max: 16 00:07:32.625 Min: 16 00:07:32.625 Number of Namespaces: 256 00:07:32.625 Compare Command: Supported 00:07:32.625 Write Uncorrectable Command: Not Supported 00:07:32.625 Dataset Management Command: Supported 00:07:32.625 Write Zeroes Command: Supported 00:07:32.625 Set Features Save Field: Supported 00:07:32.625 Reservations: Not Supported 00:07:32.625 Timestamp: Supported 00:07:32.625 Copy: Supported 00:07:32.625 Volatile Write Cache: Present 00:07:32.625 Atomic Write Unit (Normal): 1 00:07:32.625 Atomic Write Unit (PFail): 1 00:07:32.625 Atomic Compare & Write Unit: 1 00:07:32.625 Fused Compare & Write: Not Supported 00:07:32.625 Scatter-Gather List 00:07:32.625 SGL Command Set: Supported 00:07:32.625 SGL Keyed: Not Supported 00:07:32.625 SGL Bit Bucket Descriptor: Not Supported 00:07:32.625 SGL Metadata Pointer: Not Supported 00:07:32.625 Oversized SGL: Not Supported 00:07:32.625 SGL Metadata Address: Not Supported 00:07:32.625 SGL Offset: Not Supported 00:07:32.625 Transport SGL Data Block: Not Supported 00:07:32.625 Replay Protected Memory Block: Not Supported 00:07:32.625 00:07:32.625 Firmware Slot Information 00:07:32.625 ========================= 00:07:32.625 Active slot: 1 00:07:32.625 Slot 1 Firmware Revision: 1.0 00:07:32.625 00:07:32.625 00:07:32.625 Commands Supported and Effects 00:07:32.625 ============================== 00:07:32.625 Admin Commands 00:07:32.625 -------------- 00:07:32.625 Delete I/O Submission Queue (00h): Supported 00:07:32.625 Create I/O Submission Queue (01h): Supported 00:07:32.625 Get Log Page (02h): Supported 00:07:32.625 Delete I/O Completion Queue (04h): Supported 00:07:32.625 Create I/O Completion Queue (05h): Supported 00:07:32.625 Identify (06h): Supported 00:07:32.625 Abort (08h): Supported 00:07:32.625 Set Features (09h): Supported 00:07:32.625 Get Features (0Ah): Supported 00:07:32.625 Asynchronous Event Request (0Ch): Supported 00:07:32.625 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.625 Directive Send (19h): Supported 00:07:32.625 Directive Receive (1Ah): Supported 00:07:32.625 Virtualization Management (1Ch): Supported 00:07:32.625 Doorbell Buffer Config (7Ch): Supported 00:07:32.625 Format NVM (80h): Supported LBA-Change 00:07:32.625 I/O Commands 00:07:32.625 ------------ 00:07:32.625 Flush (00h): Supported LBA-Change 00:07:32.625 Write (01h): Supported LBA-Change
00:07:32.625 Read (02h): Supported 00:07:32.625 Compare (05h): Supported 00:07:32.625 Write Zeroes (08h): Supported LBA-Change 00:07:32.625 Dataset Management (09h): Supported LBA-Change 00:07:32.625 Unknown (0Ch): Supported 00:07:32.625 Unknown (12h): Supported 00:07:32.625 Copy (19h): Supported LBA-Change 00:07:32.625 Unknown (1Dh): Supported LBA-Change 00:07:32.625 00:07:32.625 Error Log 00:07:32.625 ========= 00:07:32.625 00:07:32.625 Arbitration 00:07:32.625 =========== 00:07:32.625 Arbitration Burst: no limit 00:07:32.625 00:07:32.625 Power Management 00:07:32.625 ================ 00:07:32.625 Number of Power States: 1 00:07:32.625 Current Power State: Power State #0 00:07:32.625 Power State #0: 00:07:32.625 Max Power: 25.00 W 00:07:32.625 Non-Operational State: Operational 00:07:32.625 Entry Latency: 16 microseconds 00:07:32.625 Exit Latency: 4 microseconds 00:07:32.625 Relative Read Throughput: 0 00:07:32.625 Relative Read Latency: 0 00:07:32.625 Relative Write Throughput: 0 00:07:32.625 Relative Write Latency: 0 00:07:32.625 Idle Power: Not Reported 00:07:32.625 Active Power: Not Reported 00:07:32.625 Non-Operational Permissive Mode: Not Supported 00:07:32.625 00:07:32.625 Health Information 00:07:32.625 ================== 00:07:32.625 Critical Warnings: 00:07:32.625 Available Spare Space: OK 00:07:32.625 Temperature: OK 00:07:32.625 Device Reliability: OK 00:07:32.625 Read Only: No 00:07:32.625 Volatile Memory Backup: OK 00:07:32.625 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.625 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.625 Available Spare: 0% 00:07:32.625 Available Spare Threshold: 0% 00:07:32.625 Life Percentage Used: 0% 00:07:32.625 Data Units Read: 2422 00:07:32.625 Data Units Written: 2209 00:07:32.625 Host Read Commands: 129679 00:07:32.625 Host Write Commands: 127948 00:07:32.625 Controller Busy Time: 0 minutes 00:07:32.625 Power Cycles: 0 00:07:32.625 Power On Hours: 0 hours 00:07:32.625 Unsafe Shutdowns: 0 00:07:32.625 Unrecoverable Media Errors: 0 00:07:32.625 Lifetime Error Log Entries: 0 00:07:32.625 Warning Temperature Time: 0 minutes 00:07:32.625 Critical Temperature Time: 0 minutes 00:07:32.625 00:07:32.625 Number of Queues 00:07:32.625 ================ 00:07:32.625 Number of I/O Submission Queues: 64 00:07:32.625 Number of I/O Completion Queues: 64 00:07:32.625 00:07:32.625 ZNS Specific Controller Data 00:07:32.625 ============================ 00:07:32.625 Zone Append Size Limit: 0 00:07:32.625 00:07:32.625 00:07:32.625 Active Namespaces 00:07:32.625 ================= 00:07:32.625 Namespace ID:1 00:07:32.625 Error Recovery Timeout: Unlimited 00:07:32.625 Command Set Identifier: NVM (00h) 00:07:32.625 Deallocate: Supported 00:07:32.625 Deallocated/Unwritten Error: Supported 00:07:32.625 Deallocated Read Value: All 0x00 00:07:32.625 Deallocate in Write Zeroes: Not Supported 00:07:32.625 Deallocated Guard Field: 0xFFFF 00:07:32.625 Flush: Supported 00:07:32.625 Reservation: Not Supported 00:07:32.625 Namespace Sharing Capabilities: Private 00:07:32.625 Size (in LBAs): 1048576 (4GiB) 00:07:32.625 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.625 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.625 Thin Provisioning: Not Supported 00:07:32.625 Per-NS Atomic Units: No 00:07:32.625 Maximum Single Source Range Length: 128 00:07:32.625 Maximum Copy Length: 128 00:07:32.625 Maximum Source Range Count: 128 00:07:32.625 NGUID/EUI64 Never Reused: No 00:07:32.625 Namespace Write Protected: No 00:07:32.625 Number of LBA Formats: 8 00:07:32.625 Current LBA 
Format: LBA Format #04 00:07:32.625 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.625 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.625 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.625 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.625 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.625 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.625 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.625 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.625 00:07:32.625 NVM Specific Namespace Data 00:07:32.625 =========================== 00:07:32.625 Logical Block Storage Tag Mask: 0 00:07:32.625 Protection Information Capabilities: 00:07:32.625 16b Guard Protection Information Storage Tag Support: No 00:07:32.625 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.625 Storage Tag Check Read Support: No 00:07:32.625 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.625 Namespace ID:2 00:07:32.625 Error Recovery Timeout: Unlimited 00:07:32.625 Command Set Identifier: NVM (00h) 00:07:32.625 Deallocate: Supported 00:07:32.625 Deallocated/Unwritten Error: Supported 00:07:32.625 Deallocated Read Value: All 0x00 00:07:32.625 Deallocate in Write Zeroes: Not Supported 00:07:32.625 Deallocated Guard Field: 0xFFFF 00:07:32.625 Flush: Supported 00:07:32.625 Reservation: Not Supported 00:07:32.625 Namespace Sharing Capabilities: Private 00:07:32.625 Size (in LBAs): 1048576 (4GiB) 00:07:32.625 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.625 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.625 Thin Provisioning: Not Supported 00:07:32.625 Per-NS Atomic Units: No 00:07:32.625 Maximum Single Source Range Length: 128 00:07:32.625 Maximum Copy Length: 128 00:07:32.625 Maximum Source Range Count: 128 00:07:32.625 NGUID/EUI64 Never Reused: No 00:07:32.625 Namespace Write Protected: No 00:07:32.625 Number of LBA Formats: 8 00:07:32.625 Current LBA Format: LBA Format #04 00:07:32.625 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.625 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.625 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.625 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.625 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.625 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.625 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.625 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.625 00:07:32.625 NVM Specific Namespace Data 00:07:32.626 =========================== 00:07:32.626 Logical Block Storage Tag Mask: 0 00:07:32.626 Protection Information Capabilities: 00:07:32.626 16b Guard Protection Information Storage Tag Support: No 00:07:32.626 16b Guard Protection 
Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.626 Storage Tag Check Read Support: No 00:07:32.626 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Namespace ID:3 00:07:32.626 Error Recovery Timeout: Unlimited 00:07:32.626 Command Set Identifier: NVM (00h) 00:07:32.626 Deallocate: Supported 00:07:32.626 Deallocated/Unwritten Error: Supported 00:07:32.626 Deallocated Read Value: All 0x00 00:07:32.626 Deallocate in Write Zeroes: Not Supported 00:07:32.626 Deallocated Guard Field: 0xFFFF 00:07:32.626 Flush: Supported 00:07:32.626 Reservation: Not Supported 00:07:32.626 Namespace Sharing Capabilities: Private 00:07:32.626 Size (in LBAs): 1048576 (4GiB) 00:07:32.626 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.626 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.626 Thin Provisioning: Not Supported 00:07:32.626 Per-NS Atomic Units: No 00:07:32.626 Maximum Single Source Range Length: 128 00:07:32.626 Maximum Copy Length: 128 00:07:32.626 Maximum Source Range Count: 128 00:07:32.626 NGUID/EUI64 Never Reused: No 00:07:32.626 Namespace Write Protected: No 00:07:32.626 Number of LBA Formats: 8 00:07:32.626 Current LBA Format: LBA Format #04 00:07:32.626 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.626 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.626 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.626 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.626 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.626 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.626 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.626 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.626 00:07:32.626 NVM Specific Namespace Data 00:07:32.626 =========================== 00:07:32.626 Logical Block Storage Tag Mask: 0 00:07:32.626 Protection Information Capabilities: 00:07:32.626 16b Guard Protection Information Storage Tag Support: No 00:07:32.626 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.626 Storage Tag Check Read Support: No 00:07:32.626 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:07:32.626 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.626 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.626 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:32.885 ===================================================== 00:07:32.885 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:32.885 ===================================================== 00:07:32.885 Controller Capabilities/Features 00:07:32.885 ================================ 00:07:32.885 Vendor ID: 1b36 00:07:32.885 Subsystem Vendor ID: 1af4 00:07:32.885 Serial Number: 12340 00:07:32.885 Model Number: QEMU NVMe Ctrl 00:07:32.885 Firmware Version: 8.0.0 00:07:32.885 Recommended Arb Burst: 6 00:07:32.885 IEEE OUI Identifier: 00 54 52 00:07:32.885 Multi-path I/O 00:07:32.885 May have multiple subsystem ports: No 00:07:32.885 May have multiple controllers: No 00:07:32.885 Associated with SR-IOV VF: No 00:07:32.885 Max Data Transfer Size: 524288 00:07:32.885 Max Number of Namespaces: 256 00:07:32.885 Max Number of I/O Queues: 64 00:07:32.885 NVMe Specification Version (VS): 1.4 00:07:32.885 NVMe Specification Version (Identify): 1.4 00:07:32.885 Maximum Queue Entries: 2048 00:07:32.885 Contiguous Queues Required: Yes 00:07:32.885 Arbitration Mechanisms Supported 00:07:32.885 Weighted Round Robin: Not Supported 00:07:32.885 Vendor Specific: Not Supported 00:07:32.885 Reset Timeout: 7500 ms 00:07:32.885 Doorbell Stride: 4 bytes 00:07:32.885 NVM Subsystem Reset: Not Supported 00:07:32.885 Command Sets Supported 00:07:32.885 NVM Command Set: Supported 00:07:32.885 Boot Partition: Not Supported 00:07:32.885 Memory Page Size Minimum: 4096 bytes 00:07:32.885 Memory Page Size Maximum: 65536 bytes 00:07:32.885 Persistent Memory Region: Not Supported 00:07:32.885 Optional Asynchronous Events Supported 00:07:32.885 Namespace Attribute Notices: Supported 00:07:32.885 Firmware Activation Notices: Not Supported 00:07:32.885 ANA Change Notices: Not Supported 00:07:32.885 PLE Aggregate Log Change Notices: Not Supported 00:07:32.885 LBA Status Info Alert Notices: Not Supported 00:07:32.885 EGE Aggregate Log Change Notices: Not Supported 00:07:32.885 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.885 Zone Descriptor Change Notices: Not Supported 00:07:32.885 Discovery Log Change Notices: Not Supported 00:07:32.885 Controller Attributes 00:07:32.885 128-bit Host Identifier: Not Supported 00:07:32.885 Non-Operational Permissive Mode: Not Supported 00:07:32.885 NVM Sets: Not Supported 00:07:32.885 Read Recovery Levels: Not Supported 00:07:32.885 Endurance Groups: Not Supported 00:07:32.885 Predictable Latency Mode: Not Supported 00:07:32.885 Traffic Based Keep Alive: Not Supported 00:07:32.886 Namespace Granularity: Not Supported 00:07:32.886 SQ Associations: Not Supported 00:07:32.886 UUID List: Not Supported 00:07:32.886 Multi-Domain Subsystem: Not Supported 00:07:32.886 Fixed Capacity Management: Not Supported 00:07:32.886 Variable Capacity Management: Not Supported 00:07:32.886 Delete Endurance Group: Not Supported 00:07:32.886 Delete NVM Set: Not Supported 00:07:32.886 Extended LBA Formats Supported: Supported 00:07:32.886 Flexible Data Placement Supported: Not Supported 00:07:32.886 00:07:32.886 Controller Memory Buffer Support 00:07:32.886 ================================ 00:07:32.886 Supported: No 00:07:32.886
00:07:32.886 Persistent Memory Region Support 00:07:32.886 ================================ 00:07:32.886 Supported: No 00:07:32.886 00:07:32.886 Admin Command Set Attributes 00:07:32.886 ============================ 00:07:32.886 Security Send/Receive: Not Supported 00:07:32.886 Format NVM: Supported 00:07:32.886 Firmware Activate/Download: Not Supported 00:07:32.886 Namespace Management: Supported 00:07:32.886 Device Self-Test: Not Supported 00:07:32.886 Directives: Supported 00:07:32.886 NVMe-MI: Not Supported 00:07:32.886 Virtualization Management: Not Supported 00:07:32.886 Doorbell Buffer Config: Supported 00:07:32.886 Get LBA Status Capability: Not Supported 00:07:32.886 Command & Feature Lockdown Capability: Not Supported 00:07:32.886 Abort Command Limit: 4 00:07:32.886 Async Event Request Limit: 4 00:07:32.886 Number of Firmware Slots: N/A 00:07:32.886 Firmware Slot 1 Read-Only: N/A 00:07:32.886 Firmware Activation Without Reset: N/A 00:07:32.886 Multiple Update Detection Support: N/A 00:07:32.886 Firmware Update Granularity: No Information Provided 00:07:32.886 Per-Namespace SMART Log: Yes 00:07:32.886 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.886 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:32.886 Command Effects Log Page: Supported 00:07:32.886 Get Log Page Extended Data: Supported 00:07:32.886 Telemetry Log Pages: Not Supported 00:07:32.886 Persistent Event Log Pages: Not Supported 00:07:32.886 Supported Log Pages Log Page: May Support 00:07:32.886 Commands Supported & Effects Log Page: Not Supported 00:07:32.886 Feature Identifiers & Effects Log Page: May Support 00:07:32.886 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.886 Data Area 4 for Telemetry Log: Not Supported 00:07:32.886 Error Log Page Entries Supported: 1 00:07:32.886 Keep Alive: Not Supported 00:07:32.886 00:07:32.886 NVM Command Set Attributes 00:07:32.886 ========================== 00:07:32.886 Submission Queue Entry Size 00:07:32.886 Max: 64 00:07:32.886 Min: 64 00:07:32.886 Completion Queue Entry Size 00:07:32.886 Max: 16 00:07:32.886 Min: 16 00:07:32.886 Number of Namespaces: 256 00:07:32.886 Compare Command: Supported 00:07:32.886 Write Uncorrectable Command: Not Supported 00:07:32.886 Dataset Management Command: Supported 00:07:32.886 Write Zeroes Command: Supported 00:07:32.886 Set Features Save Field: Supported 00:07:32.886 Reservations: Not Supported 00:07:32.886 Timestamp: Supported 00:07:32.886 Copy: Supported 00:07:32.886 Volatile Write Cache: Present 00:07:32.886 Atomic Write Unit (Normal): 1 00:07:32.886 Atomic Write Unit (PFail): 1 00:07:32.886 Atomic Compare & Write Unit: 1 00:07:32.886 Fused Compare & Write: Not Supported 00:07:32.886 Scatter-Gather List 00:07:32.886 SGL Command Set: Supported 00:07:32.886 SGL Keyed: Not Supported 00:07:32.886 SGL Bit Bucket Descriptor: Not Supported 00:07:32.886 SGL Metadata Pointer: Not Supported 00:07:32.886 Oversized SGL: Not Supported 00:07:32.886 SGL Metadata Address: Not Supported 00:07:32.886 SGL Offset: Not Supported 00:07:32.886 Transport SGL Data Block: Not Supported 00:07:32.886 Replay Protected Memory Block: Not Supported 00:07:32.886 00:07:32.886 Firmware Slot Information 00:07:32.886 ========================= 00:07:32.886 Active slot: 1 00:07:32.886 Slot 1 Firmware Revision: 1.0 00:07:32.886 00:07:32.886 00:07:32.886 Commands Supported and Effects 00:07:32.886 ============================== 00:07:32.886 Admin Commands 00:07:32.886 -------------- 00:07:32.886 Delete I/O Submission Queue (00h): Supported 00:07:32.886
Create I/O Submission Queue (01h): Supported 00:07:32.886 Get Log Page (02h): Supported 00:07:32.886 Delete I/O Completion Queue (04h): Supported 00:07:32.886 Create I/O Completion Queue (05h): Supported 00:07:32.886 Identify (06h): Supported 00:07:32.886 Abort (08h): Supported 00:07:32.886 Set Features (09h): Supported 00:07:32.886 Get Features (0Ah): Supported 00:07:32.886 Asynchronous Event Request (0Ch): Supported 00:07:32.886 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.886 Directive Send (19h): Supported 00:07:32.886 Directive Receive (1Ah): Supported 00:07:32.886 Virtualization Management (1Ch): Supported 00:07:32.886 Doorbell Buffer Config (7Ch): Supported 00:07:32.886 Format NVM (80h): Supported LBA-Change 00:07:32.886 I/O Commands 00:07:32.886 ------------ 00:07:32.886 Flush (00h): Supported LBA-Change 00:07:32.886 Write (01h): Supported LBA-Change 00:07:32.886 Read (02h): Supported 00:07:32.886 Compare (05h): Supported 00:07:32.886 Write Zeroes (08h): Supported LBA-Change 00:07:32.886 Dataset Management (09h): Supported LBA-Change 00:07:32.886 Unknown (0Ch): Supported 00:07:32.886 Unknown (12h): Supported 00:07:32.886 Copy (19h): Supported LBA-Change 00:07:32.886 Unknown (1Dh): Supported LBA-Change 00:07:32.886 00:07:32.886 Error Log 00:07:32.886 ========= 00:07:32.886 00:07:32.886 Arbitration 00:07:32.886 =========== 00:07:32.886 Arbitration Burst: no limit 00:07:32.886 00:07:32.886 Power Management 00:07:32.886 ================ 00:07:32.886 Number of Power States: 1 00:07:32.886 Current Power State: Power State #0 00:07:32.886 Power State #0: 00:07:32.886 Max Power: 25.00 W 00:07:32.886 Non-Operational State: Operational 00:07:32.886 Entry Latency: 16 microseconds 00:07:32.886 Exit Latency: 4 microseconds 00:07:32.886 Relative Read Throughput: 0 00:07:32.886 Relative Read Latency: 0 00:07:32.886 Relative Write Throughput: 0 00:07:32.886 Relative Write Latency: 0 00:07:32.886 Idle Power: Not Reported 00:07:32.886 Active Power: Not Reported 00:07:32.886 Non-Operational Permissive Mode: Not Supported 00:07:32.886 00:07:32.886 Health Information 00:07:32.886 ================== 00:07:32.886 Critical Warnings: 00:07:32.886 Available Spare Space: OK 00:07:32.886 Temperature: OK 00:07:32.886 Device Reliability: OK 00:07:32.886 Read Only: No 00:07:32.886 Volatile Memory Backup: OK 00:07:32.886 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.886 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.886 Available Spare: 0% 00:07:32.886 Available Spare Threshold: 0% 00:07:32.886 Life Percentage Used: 0% 00:07:32.886 Data Units Read: 748 00:07:32.886 Data Units Written: 677 00:07:32.886 Host Read Commands: 42406 00:07:32.886 Host Write Commands: 42192 00:07:32.886 Controller Busy Time: 0 minutes 00:07:32.886 Power Cycles: 0 00:07:32.886 Power On Hours: 0 hours 00:07:32.886 Unsafe Shutdowns: 0 00:07:32.886 Unrecoverable Media Errors: 0 00:07:32.886 Lifetime Error Log Entries: 0 00:07:32.886 Warning Temperature Time: 0 minutes 00:07:32.886 Critical Temperature Time: 0 minutes 00:07:32.886 00:07:32.886 Number of Queues 00:07:32.886 ================ 00:07:32.886 Number of I/O Submission Queues: 64 00:07:32.886 Number of I/O Completion Queues: 64 00:07:32.886 00:07:32.886 ZNS Specific Controller Data 00:07:32.886 ============================ 00:07:32.886 Zone Append Size Limit: 0 00:07:32.886 00:07:32.886 00:07:32.886 Active Namespaces 00:07:32.886 ================= 00:07:32.886 Namespace ID:1 00:07:32.886 Error Recovery Timeout: Unlimited 00:07:32.886 Command Set 
Identifier: NVM (00h) 00:07:32.886 Deallocate: Supported 00:07:32.886 Deallocated/Unwritten Error: Supported 00:07:32.886 Deallocated Read Value: All 0x00 00:07:32.886 Deallocate in Write Zeroes: Not Supported 00:07:32.886 Deallocated Guard Field: 0xFFFF 00:07:32.886 Flush: Supported 00:07:32.886 Reservation: Not Supported 00:07:32.886 Metadata Transferred as: Separate Metadata Buffer 00:07:32.887 Namespace Sharing Capabilities: Private 00:07:32.887 Size (in LBAs): 1548666 (5GiB) 00:07:32.887 Capacity (in LBAs): 1548666 (5GiB) 00:07:32.887 Utilization (in LBAs): 1548666 (5GiB) 00:07:32.887 Thin Provisioning: Not Supported 00:07:32.887 Per-NS Atomic Units: No 00:07:32.887 Maximum Single Source Range Length: 128 00:07:32.887 Maximum Copy Length: 128 00:07:32.887 Maximum Source Range Count: 128 00:07:32.887 NGUID/EUI64 Never Reused: No 00:07:32.887 Namespace Write Protected: No 00:07:32.887 Number of LBA Formats: 8 00:07:32.887 Current LBA Format: LBA Format #07 00:07:32.887 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.887 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.887 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.887 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.887 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.887 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.887 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.887 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.887 00:07:32.887 NVM Specific Namespace Data 00:07:32.887 =========================== 00:07:32.887 Logical Block Storage Tag Mask: 0 00:07:32.887 Protection Information Capabilities: 00:07:32.887 16b Guard Protection Information Storage Tag Support: No 00:07:32.887 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.887 Storage Tag Check Read Support: No 00:07:32.887 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.887 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.887 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:32.887 ===================================================== 00:07:32.887 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:32.887 ===================================================== 00:07:32.887 Controller Capabilities/Features 00:07:32.887 ================================ 00:07:32.887 Vendor ID: 1b36 00:07:32.887 Subsystem Vendor ID: 1af4 00:07:32.887 Serial Number: 12341 00:07:32.887 Model Number: QEMU NVMe Ctrl 00:07:32.887 Firmware Version: 8.0.0 00:07:32.887 Recommended Arb Burst: 6 00:07:32.887 IEEE OUI Identifier: 00 54 52 00:07:32.887 Multi-path I/O 00:07:32.887 May have 
multiple subsystem ports: No 00:07:32.887 May have multiple controllers: No 00:07:32.887 Associated with SR-IOV VF: No 00:07:32.887 Max Data Transfer Size: 524288 00:07:32.887 Max Number of Namespaces: 256 00:07:32.887 Max Number of I/O Queues: 64 00:07:32.887 NVMe Specification Version (VS): 1.4 00:07:32.887 NVMe Specification Version (Identify): 1.4 00:07:32.887 Maximum Queue Entries: 2048 00:07:32.887 Contiguous Queues Required: Yes 00:07:32.887 Arbitration Mechanisms Supported 00:07:32.887 Weighted Round Robin: Not Supported 00:07:32.887 Vendor Specific: Not Supported 00:07:32.887 Reset Timeout: 7500 ms 00:07:32.887 Doorbell Stride: 4 bytes 00:07:32.887 NVM Subsystem Reset: Not Supported 00:07:32.887 Command Sets Supported 00:07:32.887 NVM Command Set: Supported 00:07:32.887 Boot Partition: Not Supported 00:07:32.887 Memory Page Size Minimum: 4096 bytes 00:07:32.887 Memory Page Size Maximum: 65536 bytes 00:07:32.887 Persistent Memory Region: Not Supported 00:07:32.887 Optional Asynchronous Events Supported 00:07:32.887 Namespace Attribute Notices: Supported 00:07:32.887 Firmware Activation Notices: Not Supported 00:07:32.887 ANA Change Notices: Not Supported 00:07:32.887 PLE Aggregate Log Change Notices: Not Supported 00:07:32.887 LBA Status Info Alert Notices: Not Supported 00:07:32.887 EGE Aggregate Log Change Notices: Not Supported 00:07:32.887 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.887 Zone Descriptor Change Notices: Not Supported 00:07:32.887 Discovery Log Change Notices: Not Supported 00:07:32.887 Controller Attributes 00:07:32.887 128-bit Host Identifier: Not Supported 00:07:32.887 Non-Operational Permissive Mode: Not Supported 00:07:32.887 NVM Sets: Not Supported 00:07:32.887 Read Recovery Levels: Not Supported 00:07:32.887 Endurance Groups: Not Supported 00:07:32.887 Predictable Latency Mode: Not Supported 00:07:32.887 Traffic Based Keep Alive: Not Supported 00:07:32.887 Namespace Granularity: Not Supported 00:07:32.887 SQ Associations: Not Supported 00:07:32.887 UUID List: Not Supported 00:07:32.887 Multi-Domain Subsystem: Not Supported 00:07:32.887 Fixed Capacity Management: Not Supported 00:07:32.887 Variable Capacity Management: Not Supported 00:07:32.887 Delete Endurance Group: Not Supported 00:07:32.887 Delete NVM Set: Not Supported 00:07:32.887 Extended LBA Formats Supported: Supported 00:07:32.887 Flexible Data Placement Supported: Not Supported 00:07:32.887 00:07:32.887 Controller Memory Buffer Support 00:07:32.887 ================================ 00:07:32.887 Supported: No 00:07:32.887 00:07:32.887 Persistent Memory Region Support 00:07:32.887 ================================ 00:07:32.887 Supported: No 00:07:32.887 00:07:32.887 Admin Command Set Attributes 00:07:32.887 ============================ 00:07:32.887 Security Send/Receive: Not Supported 00:07:32.887 Format NVM: Supported 00:07:32.887 Firmware Activate/Download: Not Supported 00:07:32.887 Namespace Management: Supported 00:07:32.887 Device Self-Test: Not Supported 00:07:32.887 Directives: Supported 00:07:32.887 NVMe-MI: Not Supported 00:07:32.887 Virtualization Management: Not Supported 00:07:32.887 Doorbell Buffer Config: Supported 00:07:32.887 Get LBA Status Capability: Not Supported 00:07:32.887 Command & Feature Lockdown Capability: Not Supported 00:07:32.887 Abort Command Limit: 4 00:07:32.887 Async Event Request Limit: 4 00:07:32.887 Number of Firmware Slots: N/A 00:07:32.887 Firmware Slot 1 Read-Only: N/A 00:07:32.887 Firmware Activation Without Reset: N/A 00:07:32.887 Multiple
Update Detection Support: N/A 00:07:32.887 Firmware Update Granularity: No Information Provided 00:07:32.887 Per-Namespace SMART Log: Yes 00:07:32.887 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.887 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:32.887 Command Effects Log Page: Supported 00:07:32.887 Get Log Page Extended Data: Supported 00:07:32.887 Telemetry Log Pages: Not Supported 00:07:32.887 Persistent Event Log Pages: Not Supported 00:07:32.887 Supported Log Pages Log Page: May Support 00:07:32.887 Commands Supported & Effects Log Page: Not Supported 00:07:32.887 Feature Identifiers & Effects Log Page: May Support 00:07:32.887 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.887 Data Area 4 for Telemetry Log: Not Supported 00:07:32.887 Error Log Page Entries Supported: 1 00:07:32.887 Keep Alive: Not Supported 00:07:32.887 00:07:32.887 NVM Command Set Attributes 00:07:32.887 ========================== 00:07:32.887 Submission Queue Entry Size 00:07:32.887 Max: 64 00:07:32.887 Min: 64 00:07:32.887 Completion Queue Entry Size 00:07:32.887 Max: 16 00:07:32.887 Min: 16 00:07:32.887 Number of Namespaces: 256 00:07:32.887 Compare Command: Supported 00:07:32.887 Write Uncorrectable Command: Not Supported 00:07:32.887 Dataset Management Command: Supported 00:07:32.887 Write Zeroes Command: Supported 00:07:32.887 Set Features Save Field: Supported 00:07:32.887 Reservations: Not Supported 00:07:32.887 Timestamp: Supported 00:07:32.887 Copy: Supported 00:07:32.887 Volatile Write Cache: Present 00:07:32.887 Atomic Write Unit (Normal): 1 00:07:32.887 Atomic Write Unit (PFail): 1 00:07:32.887 Atomic Compare & Write Unit: 1 00:07:32.887 Fused Compare & Write: Not Supported 00:07:32.887 Scatter-Gather List 00:07:32.887 SGL Command Set: Supported 00:07:32.887 SGL Keyed: Not Supported 00:07:32.887 SGL Bit Bucket Descriptor: Not Supported 00:07:32.887 SGL Metadata Pointer: Not Supported 00:07:32.887 Oversized SGL: Not Supported 00:07:32.887 SGL Metadata Address: Not Supported 00:07:32.887 SGL Offset: Not Supported 00:07:32.887 Transport SGL Data Block: Not Supported 00:07:32.887 Replay Protected Memory Block: Not Supported 00:07:32.887 00:07:32.887 Firmware Slot Information 00:07:32.887 ========================= 00:07:32.887 Active slot: 1 00:07:32.887 Slot 1 Firmware Revision: 1.0 00:07:32.887 00:07:32.887 00:07:32.887 Commands Supported and Effects 00:07:32.887 ============================== 00:07:32.887 Admin Commands 00:07:32.887 -------------- 00:07:32.887 Delete I/O Submission Queue (00h): Supported 00:07:32.887 Create I/O Submission Queue (01h): Supported 00:07:32.887 Get Log Page (02h): Supported 00:07:32.887 Delete I/O Completion Queue (04h): Supported 00:07:32.887 Create I/O Completion Queue (05h): Supported 00:07:32.887 Identify (06h): Supported 00:07:32.887 Abort (08h): Supported 00:07:32.887 Set Features (09h): Supported 00:07:32.887 Get Features (0Ah): Supported 00:07:32.887 Asynchronous Event Request (0Ch): Supported 00:07:32.887 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.887 Directive Send (19h): Supported 00:07:32.887 Directive Receive (1Ah): Supported 00:07:32.888 Virtualization Management (1Ch): Supported 00:07:32.888 Doorbell Buffer Config (7Ch): Supported 00:07:32.888 Format NVM (80h): Supported LBA-Change 00:07:32.888 I/O Commands 00:07:32.888 ------------ 00:07:32.888 Flush (00h): Supported LBA-Change 00:07:32.888 Write (01h): Supported LBA-Change 00:07:32.888 Read (02h): Supported 00:07:32.888 Compare (05h): Supported 00:07:32.888
Write Zeroes (08h): Supported LBA-Change 00:07:32.888 Dataset Management (09h): Supported LBA-Change 00:07:32.888 Unknown (0Ch): Supported 00:07:32.888 Unknown (12h): Supported 00:07:32.888 Copy (19h): Supported LBA-Change 00:07:32.888 Unknown (1Dh): Supported LBA-Change 00:07:32.888 00:07:32.888 Error Log 00:07:32.888 ========= 00:07:32.888 00:07:32.888 Arbitration 00:07:32.888 =========== 00:07:32.888 Arbitration Burst: no limit 00:07:32.888 00:07:32.888 Power Management 00:07:32.888 ================ 00:07:32.888 Number of Power States: 1 00:07:32.888 Current Power State: Power State #0 00:07:32.888 Power State #0: 00:07:32.888 Max Power: 25.00 W 00:07:32.888 Non-Operational State: Operational 00:07:32.888 Entry Latency: 16 microseconds 00:07:32.888 Exit Latency: 4 microseconds 00:07:32.888 Relative Read Throughput: 0 00:07:32.888 Relative Read Latency: 0 00:07:32.888 Relative Write Throughput: 0 00:07:32.888 Relative Write Latency: 0 00:07:33.147 Idle Power: Not Reported 00:07:33.147 Active Power: Not Reported 00:07:33.147 Non-Operational Permissive Mode: Not Supported 00:07:33.147 00:07:33.147 Health Information 00:07:33.147 ================== 00:07:33.147 Critical Warnings: 00:07:33.147 Available Spare Space: OK 00:07:33.147 Temperature: OK 00:07:33.147 Device Reliability: OK 00:07:33.147 Read Only: No 00:07:33.147 Volatile Memory Backup: OK 00:07:33.147 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.147 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.147 Available Spare: 0% 00:07:33.147 Available Spare Threshold: 0% 00:07:33.147 Life Percentage Used: 0% 00:07:33.147 Data Units Read: 1157 00:07:33.147 Data Units Written: 1024 00:07:33.147 Host Read Commands: 62913 00:07:33.147 Host Write Commands: 61709 00:07:33.147 Controller Busy Time: 0 minutes 00:07:33.147 Power Cycles: 0 00:07:33.147 Power On Hours: 0 hours 00:07:33.147 Unsafe Shutdowns: 0 00:07:33.147 Unrecoverable Media Errors: 0 00:07:33.147 Lifetime Error Log Entries: 0 00:07:33.147 Warning Temperature Time: 0 minutes 00:07:33.147 Critical Temperature Time: 0 minutes 00:07:33.147 00:07:33.147 Number of Queues 00:07:33.147 ================ 00:07:33.147 Number of I/O Submission Queues: 64 00:07:33.147 Number of I/O Completion Queues: 64 00:07:33.147 00:07:33.147 ZNS Specific Controller Data 00:07:33.147 ============================ 00:07:33.147 Zone Append Size Limit: 0 00:07:33.147 00:07:33.147 00:07:33.147 Active Namespaces 00:07:33.147 ================= 00:07:33.147 Namespace ID:1 00:07:33.147 Error Recovery Timeout: Unlimited 00:07:33.147 Command Set Identifier: NVM (00h) 00:07:33.147 Deallocate: Supported 00:07:33.147 Deallocated/Unwritten Error: Supported 00:07:33.147 Deallocated Read Value: All 0x00 00:07:33.147 Deallocate in Write Zeroes: Not Supported 00:07:33.147 Deallocated Guard Field: 0xFFFF 00:07:33.147 Flush: Supported 00:07:33.147 Reservation: Not Supported 00:07:33.147 Namespace Sharing Capabilities: Private 00:07:33.147 Size (in LBAs): 1310720 (5GiB) 00:07:33.147 Capacity (in LBAs): 1310720 (5GiB) 00:07:33.147 Utilization (in LBAs): 1310720 (5GiB) 00:07:33.147 Thin Provisioning: Not Supported 00:07:33.147 Per-NS Atomic Units: No 00:07:33.147 Maximum Single Source Range Length: 128 00:07:33.147 Maximum Copy Length: 128 00:07:33.147 Maximum Source Range Count: 128 00:07:33.147 NGUID/EUI64 Never Reused: No 00:07:33.147 Namespace Write Protected: No 00:07:33.147 Number of LBA Formats: 8 00:07:33.147 Current LBA Format: LBA Format #04 00:07:33.147 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:07:33.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.147 00:07:33.147 NVM Specific Namespace Data 00:07:33.147 =========================== 00:07:33.147 Logical Block Storage Tag Mask: 0 00:07:33.147 Protection Information Capabilities: 00:07:33.147 16b Guard Protection Information Storage Tag Support: No 00:07:33.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.147 Storage Tag Check Read Support: No 00:07:33.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.147 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:33.147 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:33.147 ===================================================== 00:07:33.147 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.147 ===================================================== 00:07:33.147 Controller Capabilities/Features 00:07:33.147 ================================ 00:07:33.147 Vendor ID: 1b36 00:07:33.147 Subsystem Vendor ID: 1af4 00:07:33.147 Serial Number: 12342 00:07:33.147 Model Number: QEMU NVMe Ctrl 00:07:33.147 Firmware Version: 8.0.0 00:07:33.147 Recommended Arb Burst: 6 00:07:33.147 IEEE OUI Identifier: 00 54 52 00:07:33.147 Multi-path I/O 00:07:33.147 May have multiple subsystem ports: No 00:07:33.147 May have multiple controllers: No 00:07:33.147 Associated with SR-IOV VF: No 00:07:33.147 Max Data Transfer Size: 524288 00:07:33.147 Max Number of Namespaces: 256 00:07:33.147 Max Number of I/O Queues: 64 00:07:33.147 NVMe Specification Version (VS): 1.4 00:07:33.147 NVMe Specification Version (Identify): 1.4 00:07:33.147 Maximum Queue Entries: 2048 00:07:33.147 Contiguous Queues Required: Yes 00:07:33.147 Arbitration Mechanisms Supported 00:07:33.147 Weighted Round Robin: Not Supported 00:07:33.147 Vendor Specific: Not Supported 00:07:33.147 Reset Timeout: 7500 ms 00:07:33.147 Doorbell Stride: 4 bytes 00:07:33.147 NVM Subsystem Reset: Not Supported 00:07:33.147 Command Sets Supported 00:07:33.147 NVM Command Set: Supported 00:07:33.147 Boot Partition: Not Supported 00:07:33.147 Memory Page Size Minimum: 4096 bytes 00:07:33.147 Memory Page Size Maximum: 65536 bytes 00:07:33.147 Persistent Memory Region: Not Supported 00:07:33.147 Optional Asynchronous Events Supported 00:07:33.147 
Namespace Attribute Notices: Supported 00:07:33.147 Firmware Activation Notices: Not Supported 00:07:33.147 ANA Change Notices: Not Supported 00:07:33.147 PLE Aggregate Log Change Notices: Not Supported 00:07:33.147 LBA Status Info Alert Notices: Not Supported 00:07:33.147 EGE Aggregate Log Change Notices: Not Supported 00:07:33.147 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.147 Zone Descriptor Change Notices: Not Supported 00:07:33.147 Discovery Log Change Notices: Not Supported 00:07:33.147 Controller Attributes 00:07:33.147 128-bit Host Identifier: Not Supported 00:07:33.147 Non-Operational Permissive Mode: Not Supported 00:07:33.147 NVM Sets: Not Supported 00:07:33.147 Read Recovery Levels: Not Supported 00:07:33.147 Endurance Groups: Not Supported 00:07:33.147 Predictable Latency Mode: Not Supported 00:07:33.147 Traffic Based Keep Alive: Not Supported 00:07:33.147 Namespace Granularity: Not Supported 00:07:33.147 SQ Associations: Not Supported 00:07:33.147 UUID List: Not Supported 00:07:33.147 Multi-Domain Subsystem: Not Supported 00:07:33.147 Fixed Capacity Management: Not Supported 00:07:33.147 Variable Capacity Management: Not Supported 00:07:33.147 Delete Endurance Group: Not Supported 00:07:33.147 Delete NVM Set: Not Supported 00:07:33.147 Extended LBA Formats Supported: Supported 00:07:33.147 Flexible Data Placement Supported: Not Supported 00:07:33.147 00:07:33.147 Controller Memory Buffer Support 00:07:33.147 ================================ 00:07:33.147 Supported: No 00:07:33.147 00:07:33.147 Persistent Memory Region Support 00:07:33.147 ================================ 00:07:33.147 Supported: No 00:07:33.147 00:07:33.147 Admin Command Set Attributes 00:07:33.147 ============================ 00:07:33.147 Security Send/Receive: Not Supported 00:07:33.147 Format NVM: Supported 00:07:33.147 Firmware Activate/Download: Not Supported 00:07:33.147 Namespace Management: Supported 00:07:33.147 Device Self-Test: Not Supported 00:07:33.147 Directives: Supported 00:07:33.147 NVMe-MI: Not Supported 00:07:33.147 Virtualization Management: Not Supported 00:07:33.147 Doorbell Buffer Config: Supported 00:07:33.147 Get LBA Status Capability: Not Supported 00:07:33.147 Command & Feature Lockdown Capability: Not Supported 00:07:33.147 Abort Command Limit: 4 00:07:33.147 Async Event Request Limit: 4 00:07:33.147 Number of Firmware Slots: N/A 00:07:33.147 Firmware Slot 1 Read-Only: N/A 00:07:33.147 Firmware Activation Without Reset: N/A 00:07:33.147 Multiple Update Detection Support: N/A 00:07:33.147 Firmware Update Granularity: No Information Provided 00:07:33.147 Per-Namespace SMART Log: Yes 00:07:33.147 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.148 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:33.148 Command Effects Log Page: Supported 00:07:33.148 Get Log Page Extended Data: Supported 00:07:33.148 Telemetry Log Pages: Not Supported 00:07:33.148 Persistent Event Log Pages: Not Supported 00:07:33.148 Supported Log Pages Log Page: May Support 00:07:33.148 Commands Supported & Effects Log Page: Not Supported 00:07:33.148 Feature Identifiers & Effects Log Page: May Support 00:07:33.148 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.148 Data Area 4 for Telemetry Log: Not Supported 00:07:33.148 Error Log Page Entries Supported: 1 00:07:33.148 Keep Alive: Not Supported 00:07:33.148 00:07:33.148 NVM Command Set Attributes 00:07:33.148 ========================== 00:07:33.148 Submission Queue Entry Size 00:07:33.148 Max: 64 00:07:33.148 Min: 64
00:07:33.148 Completion Queue Entry Size 00:07:33.148 Max: 16 00:07:33.148 Min: 16 00:07:33.148 Number of Namespaces: 256 00:07:33.148 Compare Command: Supported 00:07:33.148 Write Uncorrectable Command: Not Supported 00:07:33.148 Dataset Management Command: Supported 00:07:33.148 Write Zeroes Command: Supported 00:07:33.148 Set Features Save Field: Supported 00:07:33.148 Reservations: Not Supported 00:07:33.148 Timestamp: Supported 00:07:33.148 Copy: Supported 00:07:33.148 Volatile Write Cache: Present 00:07:33.148 Atomic Write Unit (Normal): 1 00:07:33.148 Atomic Write Unit (PFail): 1 00:07:33.148 Atomic Compare & Write Unit: 1 00:07:33.148 Fused Compare & Write: Not Supported 00:07:33.148 Scatter-Gather List 00:07:33.148 SGL Command Set: Supported 00:07:33.148 SGL Keyed: Not Supported 00:07:33.148 SGL Bit Bucket Descriptor: Not Supported 00:07:33.148 SGL Metadata Pointer: Not Supported 00:07:33.148 Oversized SGL: Not Supported 00:07:33.148 SGL Metadata Address: Not Supported 00:07:33.148 SGL Offset: Not Supported 00:07:33.148 Transport SGL Data Block: Not Supported 00:07:33.148 Replay Protected Memory Block: Not Supported 00:07:33.148 00:07:33.148 Firmware Slot Information 00:07:33.148 ========================= 00:07:33.148 Active slot: 1 00:07:33.148 Slot 1 Firmware Revision: 1.0 00:07:33.148 00:07:33.148 00:07:33.148 Commands Supported and Effects 00:07:33.148 ============================== 00:07:33.148 Admin Commands 00:07:33.148 -------------- 00:07:33.148 Delete I/O Submission Queue (00h): Supported 00:07:33.148 Create I/O Submission Queue (01h): Supported 00:07:33.148 Get Log Page (02h): Supported 00:07:33.148 Delete I/O Completion Queue (04h): Supported 00:07:33.148 Create I/O Completion Queue (05h): Supported 00:07:33.148 Identify (06h): Supported 00:07:33.148 Abort (08h): Supported 00:07:33.148 Set Features (09h): Supported 00:07:33.148 Get Features (0Ah): Supported 00:07:33.148 Asynchronous Event Request (0Ch): Supported 00:07:33.148 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.148 Directive Send (19h): Supported 00:07:33.148 Directive Receive (1Ah): Supported 00:07:33.148 Virtualization Management (1Ch): Supported 00:07:33.148 Doorbell Buffer Config (7Ch): Supported 00:07:33.148 Format NVM (80h): Supported LBA-Change 00:07:33.148 I/O Commands 00:07:33.148 ------------ 00:07:33.148 Flush (00h): Supported LBA-Change 00:07:33.148 Write (01h): Supported LBA-Change 00:07:33.148 Read (02h): Supported 00:07:33.148 Compare (05h): Supported 00:07:33.148 Write Zeroes (08h): Supported LBA-Change 00:07:33.148 Dataset Management (09h): Supported LBA-Change 00:07:33.148 Unknown (0Ch): Supported 00:07:33.148 Unknown (12h): Supported 00:07:33.148 Copy (19h): Supported LBA-Change 00:07:33.148 Unknown (1Dh): Supported LBA-Change 00:07:33.148 00:07:33.148 Error Log 00:07:33.148 ========= 00:07:33.148 00:07:33.148 Arbitration 00:07:33.148 =========== 00:07:33.148 Arbitration Burst: no limit 00:07:33.148 00:07:33.148 Power Management 00:07:33.148 ================ 00:07:33.148 Number of Power States: 1 00:07:33.148 Current Power State: Power State #0 00:07:33.148 Power State #0: 00:07:33.148 Max Power: 25.00 W 00:07:33.148 Non-Operational State: Operational 00:07:33.148 Entry Latency: 16 microseconds 00:07:33.148 Exit Latency: 4 microseconds 00:07:33.148 Relative Read Throughput: 0 00:07:33.148 Relative Read Latency: 0 00:07:33.148 Relative Write Throughput: 0 00:07:33.148 Relative Write Latency: 0 00:07:33.148 Idle Power: Not Reported 00:07:33.148 Active Power: Not Reported 
00:07:33.148 Non-Operational Permissive Mode: Not Supported 00:07:33.148 00:07:33.148 Health Information 00:07:33.148 ================== 00:07:33.148 Critical Warnings: 00:07:33.148 Available Spare Space: OK 00:07:33.148 Temperature: OK 00:07:33.148 Device Reliability: OK 00:07:33.148 Read Only: No 00:07:33.148 Volatile Memory Backup: OK 00:07:33.148 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.148 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.148 Available Spare: 0% 00:07:33.148 Available Spare Threshold: 0% 00:07:33.148 Life Percentage Used: 0% 00:07:33.148 Data Units Read: 2422 00:07:33.148 Data Units Written: 2209 00:07:33.148 Host Read Commands: 129679 00:07:33.148 Host Write Commands: 127948 00:07:33.148 Controller Busy Time: 0 minutes 00:07:33.148 Power Cycles: 0 00:07:33.148 Power On Hours: 0 hours 00:07:33.148 Unsafe Shutdowns: 0 00:07:33.148 Unrecoverable Media Errors: 0 00:07:33.148 Lifetime Error Log Entries: 0 00:07:33.148 Warning Temperature Time: 0 minutes 00:07:33.148 Critical Temperature Time: 0 minutes 00:07:33.148 00:07:33.148 Number of Queues 00:07:33.148 ================ 00:07:33.148 Number of I/O Submission Queues: 64 00:07:33.148 Number of I/O Completion Queues: 64 00:07:33.148 00:07:33.148 ZNS Specific Controller Data 00:07:33.148 ============================ 00:07:33.148 Zone Append Size Limit: 0 00:07:33.148 00:07:33.148 00:07:33.148 Active Namespaces 00:07:33.148 ================= 00:07:33.148 Namespace ID:1 00:07:33.148 Error Recovery Timeout: Unlimited 00:07:33.148 Command Set Identifier: NVM (00h) 00:07:33.148 Deallocate: Supported 00:07:33.148 Deallocated/Unwritten Error: Supported 00:07:33.148 Deallocated Read Value: All 0x00 00:07:33.148 Deallocate in Write Zeroes: Not Supported 00:07:33.148 Deallocated Guard Field: 0xFFFF 00:07:33.148 Flush: Supported 00:07:33.148 Reservation: Not Supported 00:07:33.148 Namespace Sharing Capabilities: Private 00:07:33.148 Size (in LBAs): 1048576 (4GiB) 00:07:33.148 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.148 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.148 Thin Provisioning: Not Supported 00:07:33.148 Per-NS Atomic Units: No 00:07:33.148 Maximum Single Source Range Length: 128 00:07:33.148 Maximum Copy Length: 128 00:07:33.148 Maximum Source Range Count: 128 00:07:33.148 NGUID/EUI64 Never Reused: No 00:07:33.148 Namespace Write Protected: No 00:07:33.148 Number of LBA Formats: 8 00:07:33.148 Current LBA Format: LBA Format #04 00:07:33.148 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.148 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.148 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.148 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.148 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.148 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.148 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.148 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.148 00:07:33.148 NVM Specific Namespace Data 00:07:33.148 =========================== 00:07:33.148 Logical Block Storage Tag Mask: 0 00:07:33.148 Protection Information Capabilities: 00:07:33.148 16b Guard Protection Information Storage Tag Support: No 00:07:33.148 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.148 Storage Tag Check Read Support: No 00:07:33.148 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:33.148 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.148 Namespace ID:2 00:07:33.148 Error Recovery Timeout: Unlimited 00:07:33.148 Command Set Identifier: NVM (00h) 00:07:33.148 Deallocate: Supported 00:07:33.148 Deallocated/Unwritten Error: Supported 00:07:33.148 Deallocated Read Value: All 0x00 00:07:33.148 Deallocate in Write Zeroes: Not Supported 00:07:33.148 Deallocated Guard Field: 0xFFFF 00:07:33.148 Flush: Supported 00:07:33.148 Reservation: Not Supported 00:07:33.148 Namespace Sharing Capabilities: Private 00:07:33.148 Size (in LBAs): 1048576 (4GiB) 00:07:33.148 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.148 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.148 Thin Provisioning: Not Supported 00:07:33.148 Per-NS Atomic Units: No 00:07:33.148 Maximum Single Source Range Length: 128 00:07:33.148 Maximum Copy Length: 128 00:07:33.148 Maximum Source Range Count: 128 00:07:33.148 NGUID/EUI64 Never Reused: No 00:07:33.148 Namespace Write Protected: No 00:07:33.149 Number of LBA Formats: 8 00:07:33.149 Current LBA Format: LBA Format #04 00:07:33.149 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.149 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.149 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.149 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.149 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.149 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.149 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.149 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.149 00:07:33.149 NVM Specific Namespace Data 00:07:33.149 =========================== 00:07:33.149 Logical Block Storage Tag Mask: 0 00:07:33.149 Protection Information Capabilities: 00:07:33.149 16b Guard Protection Information Storage Tag Support: No 00:07:33.149 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.149 Storage Tag Check Read Support: No 00:07:33.149 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Namespace ID:3 00:07:33.149 Error Recovery Timeout: Unlimited 00:07:33.149 Command Set Identifier: NVM (00h) 00:07:33.149 Deallocate: Supported 00:07:33.149 
Deallocated/Unwritten Error: Supported 00:07:33.149 Deallocated Read Value: All 0x00 00:07:33.149 Deallocate in Write Zeroes: Not Supported 00:07:33.149 Deallocated Guard Field: 0xFFFF 00:07:33.149 Flush: Supported 00:07:33.149 Reservation: Not Supported 00:07:33.149 Namespace Sharing Capabilities: Private 00:07:33.149 Size (in LBAs): 1048576 (4GiB) 00:07:33.149 Capacity (in LBAs): 1048576 (4GiB) 00:07:33.149 Utilization (in LBAs): 1048576 (4GiB) 00:07:33.149 Thin Provisioning: Not Supported 00:07:33.149 Per-NS Atomic Units: No 00:07:33.149 Maximum Single Source Range Length: 128 00:07:33.149 Maximum Copy Length: 128 00:07:33.149 Maximum Source Range Count: 128 00:07:33.149 NGUID/EUI64 Never Reused: No 00:07:33.149 Namespace Write Protected: No 00:07:33.149 Number of LBA Formats: 8 00:07:33.149 Current LBA Format: LBA Format #04 00:07:33.149 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.149 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.149 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:33.149 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.149 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.149 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.149 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.149 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.149 00:07:33.149 NVM Specific Namespace Data 00:07:33.149 =========================== 00:07:33.149 Logical Block Storage Tag Mask: 0 00:07:33.149 Protection Information Capabilities: 00:07:33.149 16b Guard Protection Information Storage Tag Support: No 00:07:33.149 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.149 Storage Tag Check Read Support: No 00:07:33.149 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.149 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:33.149 09:13:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:33.407 ===================================================== 00:07:33.407 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.407 ===================================================== 00:07:33.407 Controller Capabilities/Features 00:07:33.407 ================================ 00:07:33.407 Vendor ID: 1b36 00:07:33.407 Subsystem Vendor ID: 1af4 00:07:33.407 Serial Number: 12343 00:07:33.407 Model Number: QEMU NVMe Ctrl 00:07:33.407 Firmware Version: 8.0.0 00:07:33.407 Recommended Arb Burst: 6 00:07:33.407 IEEE OUI Identifier: 00 54 52 00:07:33.407 Multi-path I/O 00:07:33.407 May have multiple subsystem ports: No 00:07:33.407 May have multiple controllers: Yes 00:07:33.407 Associated with SR-IOV VF: No 00:07:33.407 Max 
Data Transfer Size: 524288 00:07:33.407 Max Number of Namespaces: 256 00:07:33.407 Max Number of I/O Queues: 64 00:07:33.407 NVMe Specification Version (VS): 1.4 00:07:33.407 NVMe Specification Version (Identify): 1.4 00:07:33.407 Maximum Queue Entries: 2048 00:07:33.407 Contiguous Queues Required: Yes 00:07:33.407 Arbitration Mechanisms Supported 00:07:33.407 Weighted Round Robin: Not Supported 00:07:33.407 Vendor Specific: Not Supported 00:07:33.407 Reset Timeout: 7500 ms 00:07:33.407 Doorbell Stride: 4 bytes 00:07:33.407 NVM Subsystem Reset: Not Supported 00:07:33.407 Command Sets Supported 00:07:33.407 NVM Command Set: Supported 00:07:33.407 Boot Partition: Not Supported 00:07:33.407 Memory Page Size Minimum: 4096 bytes 00:07:33.407 Memory Page Size Maximum: 65536 bytes 00:07:33.407 Persistent Memory Region: Not Supported 00:07:33.407 Optional Asynchronous Events Supported 00:07:33.407 Namespace Attribute Notices: Supported 00:07:33.407 Firmware Activation Notices: Not Supported 00:07:33.407 ANA Change Notices: Not Supported 00:07:33.407 PLE Aggregate Log Change Notices: Not Supported 00:07:33.407 LBA Status Info Alert Notices: Not Supported 00:07:33.407 EGE Aggregate Log Change Notices: Not Supported 00:07:33.407 Normal NVM Subsystem Shutdown event: Not Supported 00:07:33.407 Zone Descriptor Change Notices: Not Supported 00:07:33.407 Discovery Log Change Notices: Not Supported 00:07:33.407 Controller Attributes 00:07:33.407 128-bit Host Identifier: Not Supported 00:07:33.407 Non-Operational Permissive Mode: Not Supported 00:07:33.407 NVM Sets: Not Supported 00:07:33.407 Read Recovery Levels: Not Supported 00:07:33.407 Endurance Groups: Supported 00:07:33.407 Predictable Latency Mode: Not Supported 00:07:33.407 Traffic Based Keep Alive: Not Supported 00:07:33.407 Namespace Granularity: Not Supported 00:07:33.407 SQ Associations: Not Supported 00:07:33.407 UUID List: Not Supported 00:07:33.407 Multi-Domain Subsystem: Not Supported 00:07:33.407 Fixed Capacity Management: Not Supported 00:07:33.407 Variable Capacity Management: Not Supported 00:07:33.407 Delete Endurance Group: Not Supported 00:07:33.407 Delete NVM Set: Not Supported 00:07:33.408 Extended LBA Formats Supported: Supported 00:07:33.408 Flexible Data Placement Supported: Supported 00:07:33.408 00:07:33.408 Controller Memory Buffer Support 00:07:33.408 ================================ 00:07:33.408 Supported: No 00:07:33.408 00:07:33.408 Persistent Memory Region Support 00:07:33.408 ================================ 00:07:33.408 Supported: No 00:07:33.408 00:07:33.408 Admin Command Set Attributes 00:07:33.408 ============================ 00:07:33.408 Security Send/Receive: Not Supported 00:07:33.408 Format NVM: Supported 00:07:33.408 Firmware Activate/Download: Not Supported 00:07:33.408 Namespace Management: Supported 00:07:33.408 Device Self-Test: Not Supported 00:07:33.408 Directives: Supported 00:07:33.408 NVMe-MI: Not Supported 00:07:33.408 Virtualization Management: Not Supported 00:07:33.408 Doorbell Buffer Config: Supported 00:07:33.408 Get LBA Status Capability: Not Supported 00:07:33.408 Command & Feature Lockdown Capability: Not Supported 00:07:33.408 Abort Command Limit: 4 00:07:33.408 Async Event Request Limit: 4 00:07:33.408 Number of Firmware Slots: N/A 00:07:33.408 Firmware Slot 1 Read-Only: N/A 00:07:33.408 Firmware Activation Without Reset: N/A 00:07:33.408 Multiple Update Detection Support: N/A 00:07:33.408 Firmware Update Granularity: No Information Provided 00:07:33.408 Per-Namespace SMART Log: Yes 
00:07:33.408 Asymmetric Namespace Access Log Page: Not Supported 00:07:33.408 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:33.408 Command Effects Log Page: Supported 00:07:33.408 Get Log Page Extended Data: Supported 00:07:33.408 Telemetry Log Pages: Not Supported 00:07:33.408 Persistent Event Log Pages: Not Supported 00:07:33.408 Supported Log Pages Log Page: May Support 00:07:33.408 Commands Supported & Effects Log Page: Not Supported 00:07:33.408 Feature Identifiers & Effects Log Page: May Support 00:07:33.408 NVMe-MI Commands & Effects Log Page: May Support 00:07:33.408 Data Area 4 for Telemetry Log: Not Supported 00:07:33.408 Error Log Page Entries Supported: 1 00:07:33.408 Keep Alive: Not Supported 00:07:33.408 00:07:33.408 NVM Command Set Attributes 00:07:33.408 ========================== 00:07:33.408 Submission Queue Entry Size 00:07:33.408 Max: 64 00:07:33.408 Min: 64 00:07:33.408 Completion Queue Entry Size 00:07:33.408 Max: 16 00:07:33.408 Min: 16 00:07:33.408 Number of Namespaces: 256 00:07:33.408 Compare Command: Supported 00:07:33.408 Write Uncorrectable Command: Not Supported 00:07:33.408 Dataset Management Command: Supported 00:07:33.408 Write Zeroes Command: Supported 00:07:33.408 Set Features Save Field: Supported 00:07:33.408 Reservations: Not Supported 00:07:33.408 Timestamp: Supported 00:07:33.408 Copy: Supported 00:07:33.408 Volatile Write Cache: Present 00:07:33.408 Atomic Write Unit (Normal): 1 00:07:33.408 Atomic Write Unit (PFail): 1 00:07:33.408 Atomic Compare & Write Unit: 1 00:07:33.408 Fused Compare & Write: Not Supported 00:07:33.408 Scatter-Gather List 00:07:33.408 SGL Command Set: Supported 00:07:33.408 SGL Keyed: Not Supported 00:07:33.408 SGL Bit Bucket Descriptor: Not Supported 00:07:33.408 SGL Metadata Pointer: Not Supported 00:07:33.408 Oversized SGL: Not Supported 00:07:33.408 SGL Metadata Address: Not Supported 00:07:33.408 SGL Offset: Not Supported 00:07:33.408 Transport SGL Data Block: Not Supported 00:07:33.408 Replay Protected Memory Block: Not Supported 00:07:33.408 00:07:33.408 Firmware Slot Information 00:07:33.408 ========================= 00:07:33.408 Active slot: 1 00:07:33.408 Slot 1 Firmware Revision: 1.0 00:07:33.408 00:07:33.408 00:07:33.408 Commands Supported and Effects 00:07:33.408 ============================== 00:07:33.408 Admin Commands 00:07:33.408 -------------- 00:07:33.408 Delete I/O Submission Queue (00h): Supported 00:07:33.408 Create I/O Submission Queue (01h): Supported 00:07:33.408 Get Log Page (02h): Supported 00:07:33.408 Delete I/O Completion Queue (04h): Supported 00:07:33.408 Create I/O Completion Queue (05h): Supported 00:07:33.408 Identify (06h): Supported 00:07:33.408 Abort (08h): Supported 00:07:33.408 Set Features (09h): Supported 00:07:33.408 Get Features (0Ah): Supported 00:07:33.408 Asynchronous Event Request (0Ch): Supported 00:07:33.408 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:33.408 Directive Send (19h): Supported 00:07:33.408 Directive Receive (1Ah): Supported 00:07:33.408 Virtualization Management (1Ch): Supported 00:07:33.408 Doorbell Buffer Config (7Ch): Supported 00:07:33.408 Format NVM (80h): Supported LBA-Change 00:07:33.408 I/O Commands 00:07:33.408 ------------ 00:07:33.408 Flush (00h): Supported LBA-Change 00:07:33.408 Write (01h): Supported LBA-Change 00:07:33.408 Read (02h): Supported 00:07:33.408 Compare (05h): Supported 00:07:33.408 Write Zeroes (08h): Supported LBA-Change 00:07:33.408 Dataset Management (09h): Supported LBA-Change 00:07:33.408 Unknown (0Ch): 
Supported 00:07:33.408 Unknown (12h): Supported 00:07:33.408 Copy (19h): Supported LBA-Change 00:07:33.408 Unknown (1Dh): Supported LBA-Change 00:07:33.408 00:07:33.408 Error Log 00:07:33.408 ========= 00:07:33.408 00:07:33.408 Arbitration 00:07:33.408 =========== 00:07:33.408 Arbitration Burst: no limit 00:07:33.408 00:07:33.408 Power Management 00:07:33.408 ================ 00:07:33.408 Number of Power States: 1 00:07:33.408 Current Power State: Power State #0 00:07:33.408 Power State #0: 00:07:33.408 Max Power: 25.00 W 00:07:33.408 Non-Operational State: Operational 00:07:33.408 Entry Latency: 16 microseconds 00:07:33.408 Exit Latency: 4 microseconds 00:07:33.408 Relative Read Throughput: 0 00:07:33.408 Relative Read Latency: 0 00:07:33.408 Relative Write Throughput: 0 00:07:33.408 Relative Write Latency: 0 00:07:33.408 Idle Power: Not Reported 00:07:33.408 Active Power: Not Reported 00:07:33.408 Non-Operational Permissive Mode: Not Supported 00:07:33.408 00:07:33.408 Health Information 00:07:33.408 ================== 00:07:33.408 Critical Warnings: 00:07:33.408 Available Spare Space: OK 00:07:33.408 Temperature: OK 00:07:33.408 Device Reliability: OK 00:07:33.408 Read Only: No 00:07:33.408 Volatile Memory Backup: OK 00:07:33.408 Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.408 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:33.408 Available Spare: 0% 00:07:33.408 Available Spare Threshold: 0% 00:07:33.408 Life Percentage Used: 0% 00:07:33.408 Data Units Read: 901 00:07:33.408 Data Units Written: 830 00:07:33.408 Host Read Commands: 44017 00:07:33.408 Host Write Commands: 43440 00:07:33.408 Controller Busy Time: 0 minutes 00:07:33.408 Power Cycles: 0 00:07:33.408 Power On Hours: 0 hours 00:07:33.408 Unsafe Shutdowns: 0 00:07:33.408 Unrecoverable Media Errors: 0 00:07:33.408 Lifetime Error Log Entries: 0 00:07:33.408 Warning Temperature Time: 0 minutes 00:07:33.408 Critical Temperature Time: 0 minutes 00:07:33.408 00:07:33.408 Number of Queues 00:07:33.408 ================ 00:07:33.408 Number of I/O Submission Queues: 64 00:07:33.408 Number of I/O Completion Queues: 64 00:07:33.408 00:07:33.408 ZNS Specific Controller Data 00:07:33.408 ============================ 00:07:33.408 Zone Append Size Limit: 0 00:07:33.408 00:07:33.408 00:07:33.408 Active Namespaces 00:07:33.408 ================= 00:07:33.408 Namespace ID:1 00:07:33.408 Error Recovery Timeout: Unlimited 00:07:33.408 Command Set Identifier: NVM (00h) 00:07:33.408 Deallocate: Supported 00:07:33.408 Deallocated/Unwritten Error: Supported 00:07:33.408 Deallocated Read Value: All 0x00 00:07:33.408 Deallocate in Write Zeroes: Not Supported 00:07:33.408 Deallocated Guard Field: 0xFFFF 00:07:33.408 Flush: Supported 00:07:33.408 Reservation: Not Supported 00:07:33.408 Namespace Sharing Capabilities: Multiple Controllers 00:07:33.408 Size (in LBAs): 262144 (1GiB) 00:07:33.408 Capacity (in LBAs): 262144 (1GiB) 00:07:33.408 Utilization (in LBAs): 262144 (1GiB) 00:07:33.408 Thin Provisioning: Not Supported 00:07:33.408 Per-NS Atomic Units: No 00:07:33.408 Maximum Single Source Range Length: 128 00:07:33.408 Maximum Copy Length: 128 00:07:33.408 Maximum Source Range Count: 128 00:07:33.408 NGUID/EUI64 Never Reused: No 00:07:33.408 Namespace Write Protected: No 00:07:33.408 Endurance group ID: 1 00:07:33.408 Number of LBA Formats: 8 00:07:33.408 Current LBA Format: LBA Format #04 00:07:33.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:33.408 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:33.408 LBA Format #02: 
Data Size: 512 Metadata Size: 16 00:07:33.408 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:33.408 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:33.408 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:33.408 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:33.408 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:33.408 00:07:33.408 Get Feature FDP: 00:07:33.408 ================ 00:07:33.408 Enabled: Yes 00:07:33.408 FDP configuration index: 0 00:07:33.408 00:07:33.409 FDP configurations log page 00:07:33.409 =========================== 00:07:33.409 Number of FDP configurations: 1 00:07:33.409 Version: 0 00:07:33.409 Size: 112 00:07:33.409 FDP Configuration Descriptor: 0 00:07:33.409 Descriptor Size: 96 00:07:33.409 Reclaim Group Identifier format: 2 00:07:33.409 FDP Volatile Write Cache: Not Present 00:07:33.409 FDP Configuration: Valid 00:07:33.409 Vendor Specific Size: 0 00:07:33.409 Number of Reclaim Groups: 2 00:07:33.409 Number of Reclaim Unit Handles: 8 00:07:33.409 Max Placement Identifiers: 128 00:07:33.409 Number of Namespaces Supported: 256 00:07:33.409 Reclaim Unit Nominal Size: 6000000 bytes 00:07:33.409 Estimated Reclaim Unit Time Limit: Not Reported 00:07:33.409 RUH Desc #000: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #001: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #002: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #003: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #004: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #005: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #006: RUH Type: Initially Isolated 00:07:33.409 RUH Desc #007: RUH Type: Initially Isolated 00:07:33.409 00:07:33.409 FDP reclaim unit handle usage log page 00:07:33.409 ====================================== 00:07:33.409 Number of Reclaim Unit Handles: 8 00:07:33.409 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:33.409 RUH Usage Desc #001: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #002: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #003: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #004: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #005: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #006: RUH Attributes: Unused 00:07:33.409 RUH Usage Desc #007: RUH Attributes: Unused 00:07:33.409 00:07:33.409 FDP statistics log page 00:07:33.409 ======================= 00:07:33.409 Host bytes with metadata written: 522035200 00:07:33.409 Media bytes with metadata written: 522092544 00:07:33.409 Media bytes erased: 0 00:07:33.409 00:07:33.409 FDP events log page 00:07:33.409 =================== 00:07:33.409 Number of FDP events: 0 00:07:33.409 00:07:33.409 NVM Specific Namespace Data 00:07:33.409 =========================== 00:07:33.409 Logical Block Storage Tag Mask: 0 00:07:33.409 Protection Information Capabilities: 00:07:33.409 16b Guard Protection Information Storage Tag Support: No 00:07:33.409 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:33.409 Storage Tag Check Read Support: No 00:07:33.409 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #04: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:33.409 00:07:33.409 real 0m1.117s 00:07:33.409 user 0m0.377s 00:07:33.409 sys 0m0.522s 00:07:33.409 09:13:25 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.409 09:13:25 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:33.409 ************************************ 00:07:33.409 END TEST nvme_identify 00:07:33.409 ************************************ 00:07:33.409 09:13:25 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:33.409 09:13:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.409 09:13:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.409 09:13:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.409 ************************************ 00:07:33.409 START TEST nvme_perf 00:07:33.409 ************************************ 00:07:33.409 09:13:25 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:33.409 09:13:25 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:34.787 Initializing NVMe Controllers 00:07:34.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.787 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.787 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.787 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:34.787 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:34.787 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:34.787 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:34.787 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:34.787 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:34.787 Initialization complete. Launching workers. 
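Note on the identify pass that just completed: it is driven by the loop at nvme.sh@15-16 visible in the trace, which runs spdk_nvme_identify once per PCI function under test. A minimal standalone sketch of that loop, with the SPDK_BIN path and the bdfs list written out as assumptions for illustration (the addresses are the four controllers attached below):

#!/usr/bin/env bash
# Dump controller/namespace identify data for each NVMe PCI function,
# mirroring the nvme.sh@15-16 loop in the trace above.
# SPDK_BIN and the bdfs list are assumptions for illustration.
SPDK_BIN=${SPDK_BIN:-/home/vagrant/spdk_repo/spdk/build/bin}
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    # Same -r transport string and -i argument as the harness invocation above.
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done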
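Note on reading the perf results that follow: the spdk_nvme_perf command line above requests a 1-second read workload at queue depth 128 with 12288-byte I/Os (three 4 KiB blocks at the current LBA format), and the doubled -L flag produces the per-bucket latency histograms printed below. Each histogram row shows the cumulative percentage of completed I/O up to that latency bucket, with the raw per-bucket count in parentheses. A minimal shell sketch that recomputes the cumulative column from those counts (the input file name and I/O total are assumptions for illustration):

#!/usr/bin/env bash
# Recompute the cumulative-percentage column of one device's detailed
# latency histogram from the per-bucket counts in parentheses.
# perf-histogram.txt and TOTAL_IO are assumptions for illustration;
# TOTAL_IO is roughly IOPS x runtime for the device (~17518 x 1 s here).
TOTAL_IO=${TOTAL_IO:-17518}
grep -Eo '\( *[0-9]+\)' perf-histogram.txt |   # pull "( count )" fields
    tr -d '() ' |                              # strip parens and spaces
    awk -v total="$TOTAL_IO" '{ cum += $1; printf "%.4f%%\n", 100 * cum / total }'

Run against the rows for a single device, the running percentages this prints line up with the cumulative column in the histograms below.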
00:07:34.787 ======================================================== 00:07:34.787 Latency(us) 00:07:34.787 Device Information : IOPS MiB/s Average min max 00:07:34.787 PCIE (0000:00:10.0) NSID 1 from core 0: 17517.55 205.28 7331.02 5122.96 29889.83 00:07:34.787 PCIE (0000:00:11.0) NSID 1 from core 0: 17517.55 205.28 7324.49 5160.47 28173.65 00:07:34.787 PCIE (0000:00:13.0) NSID 1 from core 0: 17517.55 205.28 7316.46 5191.62 26788.96 00:07:34.787 PCIE (0000:00:12.0) NSID 1 from core 0: 17517.55 205.28 7308.29 5169.57 24968.66 00:07:34.787 PCIE (0000:00:12.0) NSID 2 from core 0: 17517.55 205.28 7299.21 5152.23 23176.84 00:07:34.787 PCIE (0000:00:12.0) NSID 3 from core 0: 17581.49 206.03 7261.39 5165.85 18198.23 00:07:34.787 ======================================================== 00:07:34.787 Total : 105169.26 1232.45 7306.78 5122.96 29889.83 00:07:34.787 00:07:34.787 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.787 ================================================================================= 00:07:34.787 1.00000% : 5419.323us 00:07:34.787 10.00000% : 5772.209us 00:07:34.787 25.00000% : 6074.683us 00:07:34.787 50.00000% : 6503.188us 00:07:34.787 75.00000% : 7561.846us 00:07:34.787 90.00000% : 10132.874us 00:07:34.787 95.00000% : 11695.655us 00:07:34.787 98.00000% : 13510.498us 00:07:34.787 99.00000% : 15728.640us 00:07:34.787 99.50000% : 25004.505us 00:07:34.787 99.90000% : 29642.437us 00:07:34.787 99.99000% : 30045.735us 00:07:34.787 99.99900% : 30045.735us 00:07:34.787 99.99990% : 30045.735us 00:07:34.787 99.99999% : 30045.735us 00:07:34.787 00:07:34.787 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.787 ================================================================================= 00:07:34.787 1.00000% : 5419.323us 00:07:34.787 10.00000% : 5822.622us 00:07:34.787 25.00000% : 6074.683us 00:07:34.787 50.00000% : 6452.775us 00:07:34.787 75.00000% : 7662.671us 00:07:34.787 90.00000% : 10132.874us 00:07:34.787 95.00000% : 11544.418us 00:07:34.787 98.00000% : 13208.025us 00:07:34.787 99.00000% : 16333.588us 00:07:34.787 99.50000% : 23290.486us 00:07:34.787 99.90000% : 27827.594us 00:07:34.787 99.99000% : 28230.892us 00:07:34.787 99.99900% : 28230.892us 00:07:34.787 99.99990% : 28230.892us 00:07:34.787 99.99999% : 28230.892us 00:07:34.787 00:07:34.787 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.788 ================================================================================= 00:07:34.788 1.00000% : 5444.529us 00:07:34.788 10.00000% : 5822.622us 00:07:34.788 25.00000% : 6074.683us 00:07:34.788 50.00000% : 6452.775us 00:07:34.788 75.00000% : 7662.671us 00:07:34.788 90.00000% : 10132.874us 00:07:34.788 95.00000% : 11594.831us 00:07:34.788 98.00000% : 13611.323us 00:07:34.788 99.00000% : 16131.938us 00:07:34.788 99.50000% : 21979.766us 00:07:34.788 99.90000% : 26416.049us 00:07:34.788 99.99000% : 26819.348us 00:07:34.788 99.99900% : 26819.348us 00:07:34.788 99.99990% : 26819.348us 00:07:34.788 99.99999% : 26819.348us 00:07:34.788 00:07:34.788 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.788 ================================================================================= 00:07:34.788 1.00000% : 5444.529us 00:07:34.788 10.00000% : 5822.622us 00:07:34.788 25.00000% : 6074.683us 00:07:34.788 50.00000% : 6427.569us 00:07:34.788 75.00000% : 7662.671us 00:07:34.788 90.00000% : 10183.286us 00:07:34.788 95.00000% : 11645.243us 00:07:34.788 98.00000% : 13812.972us 00:07:34.788 
99.00000% : 16232.763us 00:07:34.788 99.50000% : 20164.923us 00:07:34.788 99.90000% : 24601.206us 00:07:34.788 99.99000% : 25004.505us 00:07:34.788 99.99900% : 25004.505us 00:07:34.788 99.99990% : 25004.505us 00:07:34.788 99.99999% : 25004.505us 00:07:34.788 00:07:34.788 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.788 ================================================================================= 00:07:34.788 1.00000% : 5444.529us 00:07:34.788 10.00000% : 5822.622us 00:07:34.788 25.00000% : 6074.683us 00:07:34.788 50.00000% : 6427.569us 00:07:34.788 75.00000% : 7662.671us 00:07:34.788 90.00000% : 10183.286us 00:07:34.788 95.00000% : 11695.655us 00:07:34.788 98.00000% : 14417.920us 00:07:34.788 99.00000% : 15627.815us 00:07:34.788 99.50000% : 18350.080us 00:07:34.788 99.90000% : 22786.363us 00:07:34.788 99.99000% : 23189.662us 00:07:34.788 99.99900% : 23189.662us 00:07:34.788 99.99990% : 23189.662us 00:07:34.788 99.99999% : 23189.662us 00:07:34.788 00:07:34.788 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.788 ================================================================================= 00:07:34.788 1.00000% : 5444.529us 00:07:34.788 10.00000% : 5822.622us 00:07:34.788 25.00000% : 6074.683us 00:07:34.788 50.00000% : 6452.775us 00:07:34.788 75.00000% : 7612.258us 00:07:34.788 90.00000% : 10132.874us 00:07:34.788 95.00000% : 11746.068us 00:07:34.788 98.00000% : 13510.498us 00:07:34.788 99.00000% : 15224.517us 00:07:34.788 99.50000% : 15728.640us 00:07:34.788 99.90000% : 17845.957us 00:07:34.788 99.99000% : 18249.255us 00:07:34.788 99.99900% : 18249.255us 00:07:34.788 99.99990% : 18249.255us 00:07:34.788 99.99999% : 18249.255us 00:07:34.788 00:07:34.788 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.788 ============================================================================== 00:07:34.788 Range in us Cumulative IO count 00:07:34.788 5116.849 - 5142.055: 0.0342% ( 6) 00:07:34.788 5142.055 - 5167.262: 0.0969% ( 11) 00:07:34.788 5167.262 - 5192.468: 0.1483% ( 9) 00:07:34.788 5192.468 - 5217.674: 0.2110% ( 11) 00:07:34.788 5217.674 - 5242.880: 0.3022% ( 16) 00:07:34.788 5242.880 - 5268.086: 0.3992% ( 17) 00:07:34.788 5268.086 - 5293.292: 0.4961% ( 17) 00:07:34.788 5293.292 - 5318.498: 0.6045% ( 19) 00:07:34.788 5318.498 - 5343.705: 0.7413% ( 24) 00:07:34.788 5343.705 - 5368.911: 0.8611% ( 21) 00:07:34.788 5368.911 - 5394.117: 0.9580% ( 17) 00:07:34.788 5394.117 - 5419.323: 1.0664% ( 19) 00:07:34.788 5419.323 - 5444.529: 1.1690% ( 18) 00:07:34.788 5444.529 - 5469.735: 1.2888% ( 21) 00:07:34.788 5469.735 - 5494.942: 1.4313% ( 25) 00:07:34.788 5494.942 - 5520.148: 1.6651% ( 41) 00:07:34.788 5520.148 - 5545.354: 2.0586% ( 69) 00:07:34.788 5545.354 - 5570.560: 2.6289% ( 100) 00:07:34.788 5570.560 - 5595.766: 3.3474% ( 126) 00:07:34.788 5595.766 - 5620.972: 4.0887% ( 130) 00:07:34.788 5620.972 - 5646.178: 4.9441% ( 150) 00:07:34.788 5646.178 - 5671.385: 5.8166% ( 153) 00:07:34.788 5671.385 - 5696.591: 6.7746% ( 168) 00:07:34.788 5696.591 - 5721.797: 7.7840% ( 177) 00:07:34.788 5721.797 - 5747.003: 8.9587% ( 206) 00:07:34.788 5747.003 - 5772.209: 10.0479% ( 191) 00:07:34.788 5772.209 - 5797.415: 11.1314% ( 190) 00:07:34.788 5797.415 - 5822.622: 12.2206% ( 191) 00:07:34.788 5822.622 - 5847.828: 13.4865% ( 222) 00:07:34.788 5847.828 - 5873.034: 14.7696% ( 225) 00:07:34.788 5873.034 - 5898.240: 16.1667% ( 245) 00:07:34.788 5898.240 - 5923.446: 17.4726% ( 229) 00:07:34.788 5923.446 - 5948.652: 18.8755% ( 246) 
00:07:34.788 5948.652 - 5973.858: 20.4037% ( 268) 00:07:34.788 5973.858 - 5999.065: 21.8009% ( 245) 00:07:34.788 5999.065 - 6024.271: 23.4090% ( 282) 00:07:34.788 6024.271 - 6049.477: 24.7947% ( 243) 00:07:34.788 6049.477 - 6074.683: 26.1804% ( 243) 00:07:34.788 6074.683 - 6099.889: 27.6688% ( 261) 00:07:34.788 6099.889 - 6125.095: 29.1229% ( 255) 00:07:34.788 6125.095 - 6150.302: 30.5600% ( 252) 00:07:34.788 6150.302 - 6175.508: 32.0826% ( 267) 00:07:34.788 6175.508 - 6200.714: 33.5995% ( 266) 00:07:34.788 6200.714 - 6225.920: 35.0137% ( 248) 00:07:34.788 6225.920 - 6251.126: 36.5648% ( 272) 00:07:34.788 6251.126 - 6276.332: 38.0132% ( 254) 00:07:34.788 6276.332 - 6301.538: 39.6099% ( 280) 00:07:34.788 6301.538 - 6326.745: 40.9957% ( 243) 00:07:34.788 6326.745 - 6351.951: 42.6551% ( 291) 00:07:34.788 6351.951 - 6377.157: 44.1378% ( 260) 00:07:34.788 6377.157 - 6402.363: 45.6718% ( 269) 00:07:34.788 6402.363 - 6427.569: 47.2343% ( 274) 00:07:34.788 6427.569 - 6452.775: 48.8025% ( 275) 00:07:34.788 6452.775 - 6503.188: 51.7678% ( 520) 00:07:34.788 6503.188 - 6553.600: 54.4708% ( 474) 00:07:34.788 6553.600 - 6604.012: 56.8716% ( 421) 00:07:34.788 6604.012 - 6654.425: 59.1184% ( 394) 00:07:34.788 6654.425 - 6704.837: 60.8234% ( 299) 00:07:34.788 6704.837 - 6755.249: 62.3175% ( 262) 00:07:34.788 6755.249 - 6805.662: 63.5949% ( 224) 00:07:34.788 6805.662 - 6856.074: 64.8723% ( 224) 00:07:34.788 6856.074 - 6906.486: 65.8930% ( 179) 00:07:34.788 6906.486 - 6956.898: 66.7997% ( 159) 00:07:34.788 6956.898 - 7007.311: 67.5924% ( 139) 00:07:34.788 7007.311 - 7057.723: 68.3337% ( 130) 00:07:34.788 7057.723 - 7108.135: 69.1036% ( 135) 00:07:34.788 7108.135 - 7158.548: 69.8164% ( 125) 00:07:34.788 7158.548 - 7208.960: 70.5064% ( 121) 00:07:34.788 7208.960 - 7259.372: 71.2249% ( 126) 00:07:34.788 7259.372 - 7309.785: 71.8750% ( 114) 00:07:34.788 7309.785 - 7360.197: 72.5479% ( 118) 00:07:34.788 7360.197 - 7410.609: 73.2436% ( 122) 00:07:34.788 7410.609 - 7461.022: 73.8481% ( 106) 00:07:34.788 7461.022 - 7511.434: 74.4526% ( 106) 00:07:34.788 7511.434 - 7561.846: 75.0000% ( 96) 00:07:34.788 7561.846 - 7612.258: 75.5360% ( 94) 00:07:34.788 7612.258 - 7662.671: 76.0949% ( 98) 00:07:34.788 7662.671 - 7713.083: 76.5682% ( 83) 00:07:34.788 7713.083 - 7763.495: 77.0700% ( 88) 00:07:34.788 7763.495 - 7813.908: 77.5205% ( 79) 00:07:34.788 7813.908 - 7864.320: 77.9197% ( 70) 00:07:34.788 7864.320 - 7914.732: 78.3189% ( 70) 00:07:34.788 7914.732 - 7965.145: 78.6781% ( 63) 00:07:34.788 7965.145 - 8015.557: 79.0317% ( 62) 00:07:34.788 8015.557 - 8065.969: 79.3853% ( 62) 00:07:34.788 8065.969 - 8116.382: 79.7217% ( 59) 00:07:34.788 8116.382 - 8166.794: 80.0297% ( 54) 00:07:34.788 8166.794 - 8217.206: 80.3547% ( 57) 00:07:34.788 8217.206 - 8267.618: 80.6398% ( 50) 00:07:34.788 8267.618 - 8318.031: 80.9478% ( 54) 00:07:34.788 8318.031 - 8368.443: 81.1930% ( 43) 00:07:34.788 8368.443 - 8418.855: 81.4439% ( 44) 00:07:34.788 8418.855 - 8469.268: 81.7404% ( 52) 00:07:34.788 8469.268 - 8519.680: 82.0084% ( 47) 00:07:34.788 8519.680 - 8570.092: 82.2651% ( 45) 00:07:34.788 8570.092 - 8620.505: 82.5046% ( 42) 00:07:34.788 8620.505 - 8670.917: 82.7954% ( 51) 00:07:34.788 8670.917 - 8721.329: 83.0919% ( 52) 00:07:34.788 8721.329 - 8771.742: 83.3371% ( 43) 00:07:34.788 8771.742 - 8822.154: 83.6280% ( 51) 00:07:34.788 8822.154 - 8872.566: 83.8561% ( 40) 00:07:34.788 8872.566 - 8922.978: 84.0899% ( 41) 00:07:34.788 8922.978 - 8973.391: 84.3237% ( 41) 00:07:34.788 8973.391 - 9023.803: 84.5632% ( 42) 00:07:34.788 9023.803 - 9074.215: 
84.8141% ( 44) 00:07:34.788 9074.215 - 9124.628: 85.0593% ( 43) 00:07:34.788 9124.628 - 9175.040: 85.3672% ( 54) 00:07:34.788 9175.040 - 9225.452: 85.6752% ( 54) 00:07:34.788 9225.452 - 9275.865: 86.0002% ( 57) 00:07:34.788 9275.865 - 9326.277: 86.3196% ( 56) 00:07:34.788 9326.277 - 9376.689: 86.5591% ( 42) 00:07:34.788 9376.689 - 9427.102: 86.9126% ( 62) 00:07:34.788 9427.102 - 9477.514: 87.1693% ( 45) 00:07:34.788 9477.514 - 9527.926: 87.4601% ( 51) 00:07:34.788 9527.926 - 9578.338: 87.7224% ( 46) 00:07:34.788 9578.338 - 9628.751: 87.9562% ( 41) 00:07:34.788 9628.751 - 9679.163: 88.1729% ( 38) 00:07:34.788 9679.163 - 9729.575: 88.3896% ( 38) 00:07:34.788 9729.575 - 9779.988: 88.6006% ( 37) 00:07:34.788 9779.988 - 9830.400: 88.7945% ( 34) 00:07:34.788 9830.400 - 9880.812: 89.0112% ( 38) 00:07:34.788 9880.812 - 9931.225: 89.2051% ( 34) 00:07:34.788 9931.225 - 9981.637: 89.4731% ( 47) 00:07:34.788 9981.637 - 10032.049: 89.6898% ( 38) 00:07:34.788 10032.049 - 10082.462: 89.9635% ( 48) 00:07:34.788 10082.462 - 10132.874: 90.2144% ( 44) 00:07:34.788 10132.874 - 10183.286: 90.4653% ( 44) 00:07:34.788 10183.286 - 10233.698: 90.6763% ( 37) 00:07:34.788 10233.698 - 10284.111: 90.8702% ( 34) 00:07:34.788 10284.111 - 10334.523: 91.0242% ( 27) 00:07:34.788 10334.523 - 10384.935: 91.2181% ( 34) 00:07:34.788 10384.935 - 10435.348: 91.3891% ( 30) 00:07:34.788 10435.348 - 10485.760: 91.5659% ( 31) 00:07:34.788 10485.760 - 10536.172: 91.7199% ( 27) 00:07:34.788 10536.172 - 10586.585: 91.8739% ( 27) 00:07:34.788 10586.585 - 10636.997: 92.0734% ( 35) 00:07:34.788 10636.997 - 10687.409: 92.2445% ( 30) 00:07:34.788 10687.409 - 10737.822: 92.4042% ( 28) 00:07:34.788 10737.822 - 10788.234: 92.5354% ( 23) 00:07:34.788 10788.234 - 10838.646: 92.6950% ( 28) 00:07:34.789 10838.646 - 10889.058: 92.7863% ( 16) 00:07:34.789 10889.058 - 10939.471: 92.8775% ( 16) 00:07:34.789 10939.471 - 10989.883: 93.0030% ( 22) 00:07:34.789 10989.883 - 11040.295: 93.1455% ( 25) 00:07:34.789 11040.295 - 11090.708: 93.2881% ( 25) 00:07:34.789 11090.708 - 11141.120: 93.4364% ( 26) 00:07:34.789 11141.120 - 11191.532: 93.6302% ( 34) 00:07:34.789 11191.532 - 11241.945: 93.7614% ( 23) 00:07:34.789 11241.945 - 11292.357: 93.9154% ( 27) 00:07:34.789 11292.357 - 11342.769: 94.0807% ( 29) 00:07:34.789 11342.769 - 11393.182: 94.2062% ( 22) 00:07:34.789 11393.182 - 11443.594: 94.3545% ( 26) 00:07:34.789 11443.594 - 11494.006: 94.5141% ( 28) 00:07:34.789 11494.006 - 11544.418: 94.6624% ( 26) 00:07:34.789 11544.418 - 11594.831: 94.8107% ( 26) 00:07:34.789 11594.831 - 11645.243: 94.9875% ( 31) 00:07:34.789 11645.243 - 11695.655: 95.1471% ( 28) 00:07:34.789 11695.655 - 11746.068: 95.3125% ( 29) 00:07:34.789 11746.068 - 11796.480: 95.4551% ( 25) 00:07:34.789 11796.480 - 11846.892: 95.6033% ( 26) 00:07:34.789 11846.892 - 11897.305: 95.7744% ( 30) 00:07:34.789 11897.305 - 11947.717: 95.9398% ( 29) 00:07:34.789 11947.717 - 11998.129: 96.0709% ( 23) 00:07:34.789 11998.129 - 12048.542: 96.1850% ( 20) 00:07:34.789 12048.542 - 12098.954: 96.2933% ( 19) 00:07:34.789 12098.954 - 12149.366: 96.3846% ( 16) 00:07:34.789 12149.366 - 12199.778: 96.4359% ( 9) 00:07:34.789 12199.778 - 12250.191: 96.4758% ( 7) 00:07:34.789 12250.191 - 12300.603: 96.5100% ( 6) 00:07:34.789 12300.603 - 12351.015: 96.5443% ( 6) 00:07:34.789 12351.015 - 12401.428: 96.6184% ( 13) 00:07:34.789 12401.428 - 12451.840: 96.7039% ( 15) 00:07:34.789 12451.840 - 12502.252: 96.7609% ( 10) 00:07:34.789 12502.252 - 12552.665: 96.8351% ( 13) 00:07:34.789 12552.665 - 12603.077: 96.9035% ( 12) 
00:07:34.789 12603.077 - 12653.489: 96.9948% ( 16) 00:07:34.789 12653.489 - 12703.902: 97.0860% ( 16) 00:07:34.789 12703.902 - 12754.314: 97.1715% ( 15) 00:07:34.789 12754.314 - 12804.726: 97.2457% ( 13) 00:07:34.789 12804.726 - 12855.138: 97.2970% ( 9) 00:07:34.789 12855.138 - 12905.551: 97.3768% ( 14) 00:07:34.789 12905.551 - 13006.375: 97.5023% ( 22) 00:07:34.789 13006.375 - 13107.200: 97.6277% ( 22) 00:07:34.789 13107.200 - 13208.025: 97.7703% ( 25) 00:07:34.789 13208.025 - 13308.849: 97.8844% ( 20) 00:07:34.789 13308.849 - 13409.674: 97.9984% ( 20) 00:07:34.789 13409.674 - 13510.498: 98.1125% ( 20) 00:07:34.789 13510.498 - 13611.323: 98.1695% ( 10) 00:07:34.789 13611.323 - 13712.148: 98.1752% ( 1) 00:07:34.789 14518.745 - 14619.569: 98.1980% ( 4) 00:07:34.789 14619.569 - 14720.394: 98.2322% ( 6) 00:07:34.789 14720.394 - 14821.218: 98.2607% ( 5) 00:07:34.789 14821.218 - 14922.043: 98.3120% ( 9) 00:07:34.789 14922.043 - 15022.868: 98.3862% ( 13) 00:07:34.789 15022.868 - 15123.692: 98.4660% ( 14) 00:07:34.789 15123.692 - 15224.517: 98.5573% ( 16) 00:07:34.789 15224.517 - 15325.342: 98.6599% ( 18) 00:07:34.789 15325.342 - 15426.166: 98.7511% ( 16) 00:07:34.789 15426.166 - 15526.991: 98.8310% ( 14) 00:07:34.789 15526.991 - 15627.815: 98.9279% ( 17) 00:07:34.789 15627.815 - 15728.640: 99.0078% ( 14) 00:07:34.789 15728.640 - 15829.465: 99.0762% ( 12) 00:07:34.789 15829.465 - 15930.289: 99.1332% ( 10) 00:07:34.789 15930.289 - 16031.114: 99.2016% ( 12) 00:07:34.789 16031.114 - 16131.938: 99.2188% ( 3) 00:07:34.789 16131.938 - 16232.763: 99.2416% ( 4) 00:07:34.789 16232.763 - 16333.588: 99.2530% ( 2) 00:07:34.789 16333.588 - 16434.412: 99.2644% ( 2) 00:07:34.789 16434.412 - 16535.237: 99.2701% ( 1) 00:07:34.789 23794.609 - 23895.434: 99.2758% ( 1) 00:07:34.789 23895.434 - 23996.258: 99.2929% ( 3) 00:07:34.789 23996.258 - 24097.083: 99.3157% ( 4) 00:07:34.789 24097.083 - 24197.908: 99.3385% ( 4) 00:07:34.789 24197.908 - 24298.732: 99.3613% ( 4) 00:07:34.789 24298.732 - 24399.557: 99.3841% ( 4) 00:07:34.789 24399.557 - 24500.382: 99.4012% ( 3) 00:07:34.789 24500.382 - 24601.206: 99.4240% ( 4) 00:07:34.789 24601.206 - 24702.031: 99.4411% ( 3) 00:07:34.789 24702.031 - 24802.855: 99.4583% ( 3) 00:07:34.789 24802.855 - 24903.680: 99.4811% ( 4) 00:07:34.789 24903.680 - 25004.505: 99.5039% ( 4) 00:07:34.789 25004.505 - 25105.329: 99.5267% ( 4) 00:07:34.789 25105.329 - 25206.154: 99.5495% ( 4) 00:07:34.789 25206.154 - 25306.978: 99.5723% ( 4) 00:07:34.789 25306.978 - 25407.803: 99.6008% ( 5) 00:07:34.789 25407.803 - 25508.628: 99.6236% ( 4) 00:07:34.789 25508.628 - 25609.452: 99.6350% ( 2) 00:07:34.789 28230.892 - 28432.542: 99.6693% ( 6) 00:07:34.789 28432.542 - 28634.191: 99.7149% ( 8) 00:07:34.789 28634.191 - 28835.840: 99.7605% ( 8) 00:07:34.789 28835.840 - 29037.489: 99.8061% ( 8) 00:07:34.789 29037.489 - 29239.138: 99.8517% ( 8) 00:07:34.789 29239.138 - 29440.788: 99.8974% ( 8) 00:07:34.789 29440.788 - 29642.437: 99.9430% ( 8) 00:07:34.789 29642.437 - 29844.086: 99.9886% ( 8) 00:07:34.789 29844.086 - 30045.735: 100.0000% ( 2) 00:07:34.789 00:07:34.789 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.789 ============================================================================== 00:07:34.789 Range in us Cumulative IO count 00:07:34.789 5142.055 - 5167.262: 0.0285% ( 5) 00:07:34.789 5167.262 - 5192.468: 0.0513% ( 4) 00:07:34.789 5192.468 - 5217.674: 0.0798% ( 5) 00:07:34.789 5217.674 - 5242.880: 0.1198% ( 7) 00:07:34.789 5242.880 - 5268.086: 0.1825% ( 11) 00:07:34.789 
5268.086 - 5293.292: 0.2794% ( 17) 00:07:34.789 5293.292 - 5318.498: 0.3593% ( 14) 00:07:34.789 5318.498 - 5343.705: 0.5303% ( 30) 00:07:34.789 5343.705 - 5368.911: 0.7413% ( 37) 00:07:34.789 5368.911 - 5394.117: 0.8896% ( 26) 00:07:34.789 5394.117 - 5419.323: 1.0436% ( 27) 00:07:34.789 5419.323 - 5444.529: 1.1633% ( 21) 00:07:34.789 5444.529 - 5469.735: 1.2546% ( 16) 00:07:34.789 5469.735 - 5494.942: 1.3515% ( 17) 00:07:34.789 5494.942 - 5520.148: 1.4713% ( 21) 00:07:34.789 5520.148 - 5545.354: 1.6423% ( 30) 00:07:34.789 5545.354 - 5570.560: 1.8134% ( 30) 00:07:34.789 5570.560 - 5595.766: 2.0985% ( 50) 00:07:34.789 5595.766 - 5620.972: 2.3723% ( 48) 00:07:34.789 5620.972 - 5646.178: 2.9026% ( 93) 00:07:34.789 5646.178 - 5671.385: 3.5755% ( 118) 00:07:34.789 5671.385 - 5696.591: 4.4822% ( 159) 00:07:34.789 5696.591 - 5721.797: 5.4345% ( 167) 00:07:34.789 5721.797 - 5747.003: 6.5009% ( 187) 00:07:34.789 5747.003 - 5772.209: 7.7726% ( 223) 00:07:34.789 5772.209 - 5797.415: 9.1070% ( 234) 00:07:34.789 5797.415 - 5822.622: 10.6068% ( 263) 00:07:34.789 5822.622 - 5847.828: 12.0324% ( 250) 00:07:34.789 5847.828 - 5873.034: 13.3497% ( 231) 00:07:34.789 5873.034 - 5898.240: 14.8038% ( 255) 00:07:34.789 5898.240 - 5923.446: 16.1211% ( 231) 00:07:34.789 5923.446 - 5948.652: 17.4498% ( 233) 00:07:34.789 5948.652 - 5973.858: 18.8812% ( 251) 00:07:34.789 5973.858 - 5999.065: 20.3581% ( 259) 00:07:34.789 5999.065 - 6024.271: 22.0176% ( 291) 00:07:34.789 6024.271 - 6049.477: 23.7740% ( 308) 00:07:34.789 6049.477 - 6074.683: 25.4790% ( 299) 00:07:34.789 6074.683 - 6099.889: 27.1271% ( 289) 00:07:34.789 6099.889 - 6125.095: 28.7808% ( 290) 00:07:34.789 6125.095 - 6150.302: 30.3832% ( 281) 00:07:34.789 6150.302 - 6175.508: 32.0198% ( 287) 00:07:34.789 6175.508 - 6200.714: 33.6622% ( 288) 00:07:34.789 6200.714 - 6225.920: 35.4015% ( 305) 00:07:34.789 6225.920 - 6251.126: 37.1236% ( 302) 00:07:34.789 6251.126 - 6276.332: 38.8515% ( 303) 00:07:34.789 6276.332 - 6301.538: 40.5794% ( 303) 00:07:34.789 6301.538 - 6326.745: 42.3130% ( 304) 00:07:34.789 6326.745 - 6351.951: 44.0351% ( 302) 00:07:34.789 6351.951 - 6377.157: 45.7687% ( 304) 00:07:34.789 6377.157 - 6402.363: 47.4966% ( 303) 00:07:34.789 6402.363 - 6427.569: 49.1332% ( 287) 00:07:34.789 6427.569 - 6452.775: 50.7584% ( 285) 00:07:34.789 6452.775 - 6503.188: 53.6553% ( 508) 00:07:34.789 6503.188 - 6553.600: 56.0960% ( 428) 00:07:34.789 6553.600 - 6604.012: 58.1375% ( 358) 00:07:34.789 6604.012 - 6654.425: 59.9224% ( 313) 00:07:34.789 6654.425 - 6704.837: 61.4108% ( 261) 00:07:34.789 6704.837 - 6755.249: 62.8193% ( 247) 00:07:34.789 6755.249 - 6805.662: 64.0454% ( 215) 00:07:34.789 6805.662 - 6856.074: 65.0604% ( 178) 00:07:34.789 6856.074 - 6906.486: 65.8873% ( 145) 00:07:34.789 6906.486 - 6956.898: 66.7199% ( 146) 00:07:34.789 6956.898 - 7007.311: 67.4840% ( 134) 00:07:34.789 7007.311 - 7057.723: 68.2140% ( 128) 00:07:34.789 7057.723 - 7108.135: 68.8983% ( 120) 00:07:34.789 7108.135 - 7158.548: 69.5484% ( 114) 00:07:34.789 7158.548 - 7208.960: 70.1642% ( 108) 00:07:34.789 7208.960 - 7259.372: 70.7801% ( 108) 00:07:34.789 7259.372 - 7309.785: 71.2933% ( 90) 00:07:34.789 7309.785 - 7360.197: 71.8693% ( 101) 00:07:34.789 7360.197 - 7410.609: 72.4795% ( 107) 00:07:34.789 7410.609 - 7461.022: 73.0554% ( 101) 00:07:34.789 7461.022 - 7511.434: 73.6542% ( 105) 00:07:34.789 7511.434 - 7561.846: 74.2245% ( 100) 00:07:34.789 7561.846 - 7612.258: 74.8289% ( 106) 00:07:34.789 7612.258 - 7662.671: 75.3479% ( 91) 00:07:34.789 7662.671 - 7713.083: 75.8497% ( 88) 
00:07:34.789 7713.083 - 7763.495: 76.4142% ( 99) 00:07:34.789 7763.495 - 7813.908: 76.9788% ( 99) 00:07:34.789 7813.908 - 7864.320: 77.5205% ( 95) 00:07:34.789 7864.320 - 7914.732: 77.9881% ( 82) 00:07:34.789 7914.732 - 7965.145: 78.4215% ( 76) 00:07:34.789 7965.145 - 8015.557: 78.8891% ( 82) 00:07:34.789 8015.557 - 8065.969: 79.3682% ( 84) 00:07:34.789 8065.969 - 8116.382: 79.8244% ( 80) 00:07:34.789 8116.382 - 8166.794: 80.2007% ( 66) 00:07:34.789 8166.794 - 8217.206: 80.4802% ( 49) 00:07:34.789 8217.206 - 8267.618: 80.7596% ( 49) 00:07:34.789 8267.618 - 8318.031: 81.0390% ( 49) 00:07:34.789 8318.031 - 8368.443: 81.2785% ( 42) 00:07:34.789 8368.443 - 8418.855: 81.5123% ( 41) 00:07:34.789 8418.855 - 8469.268: 81.7803% ( 47) 00:07:34.789 8469.268 - 8519.680: 82.0370% ( 45) 00:07:34.789 8519.680 - 8570.092: 82.4133% ( 66) 00:07:34.789 8570.092 - 8620.505: 82.6243% ( 37) 00:07:34.789 8620.505 - 8670.917: 82.9151% ( 51) 00:07:34.789 8670.917 - 8721.329: 83.1318% ( 38) 00:07:34.789 8721.329 - 8771.742: 83.3542% ( 39) 00:07:34.789 8771.742 - 8822.154: 83.6166% ( 46) 00:07:34.789 8822.154 - 8872.566: 83.8732% ( 45) 00:07:34.790 8872.566 - 8922.978: 84.1184% ( 43) 00:07:34.790 8922.978 - 8973.391: 84.3636% ( 43) 00:07:34.790 8973.391 - 9023.803: 84.6373% ( 48) 00:07:34.790 9023.803 - 9074.215: 84.9167% ( 49) 00:07:34.790 9074.215 - 9124.628: 85.2133% ( 52) 00:07:34.790 9124.628 - 9175.040: 85.5098% ( 52) 00:07:34.790 9175.040 - 9225.452: 85.8063% ( 52) 00:07:34.790 9225.452 - 9275.865: 86.0915% ( 50) 00:07:34.790 9275.865 - 9326.277: 86.3538% ( 46) 00:07:34.790 9326.277 - 9376.689: 86.6161% ( 46) 00:07:34.790 9376.689 - 9427.102: 86.8841% ( 47) 00:07:34.790 9427.102 - 9477.514: 87.1635% ( 49) 00:07:34.790 9477.514 - 9527.926: 87.3917% ( 40) 00:07:34.790 9527.926 - 9578.338: 87.5969% ( 36) 00:07:34.790 9578.338 - 9628.751: 87.8422% ( 43) 00:07:34.790 9628.751 - 9679.163: 88.1444% ( 53) 00:07:34.790 9679.163 - 9729.575: 88.3497% ( 36) 00:07:34.790 9729.575 - 9779.988: 88.5835% ( 41) 00:07:34.790 9779.988 - 9830.400: 88.8059% ( 39) 00:07:34.790 9830.400 - 9880.812: 89.0283% ( 39) 00:07:34.790 9880.812 - 9931.225: 89.2735% ( 43) 00:07:34.790 9931.225 - 9981.637: 89.5358% ( 46) 00:07:34.790 9981.637 - 10032.049: 89.7639% ( 40) 00:07:34.790 10032.049 - 10082.462: 89.9635% ( 35) 00:07:34.790 10082.462 - 10132.874: 90.1688% ( 36) 00:07:34.790 10132.874 - 10183.286: 90.3798% ( 37) 00:07:34.790 10183.286 - 10233.698: 90.5737% ( 34) 00:07:34.790 10233.698 - 10284.111: 90.7505% ( 31) 00:07:34.790 10284.111 - 10334.523: 90.9158% ( 29) 00:07:34.790 10334.523 - 10384.935: 91.0812% ( 29) 00:07:34.790 10384.935 - 10435.348: 91.2523% ( 30) 00:07:34.790 10435.348 - 10485.760: 91.4005% ( 26) 00:07:34.790 10485.760 - 10536.172: 91.5944% ( 34) 00:07:34.790 10536.172 - 10586.585: 91.7256% ( 23) 00:07:34.790 10586.585 - 10636.997: 91.8853% ( 28) 00:07:34.790 10636.997 - 10687.409: 92.0506% ( 29) 00:07:34.790 10687.409 - 10737.822: 92.2274% ( 31) 00:07:34.790 10737.822 - 10788.234: 92.4042% ( 31) 00:07:34.790 10788.234 - 10838.646: 92.5924% ( 33) 00:07:34.790 10838.646 - 10889.058: 92.7578% ( 29) 00:07:34.790 10889.058 - 10939.471: 92.9459% ( 33) 00:07:34.790 10939.471 - 10989.883: 93.1284% ( 32) 00:07:34.790 10989.883 - 11040.295: 93.3337% ( 36) 00:07:34.790 11040.295 - 11090.708: 93.5390% ( 36) 00:07:34.790 11090.708 - 11141.120: 93.7443% ( 36) 00:07:34.790 11141.120 - 11191.532: 93.9439% ( 35) 00:07:34.790 11191.532 - 11241.945: 94.1378% ( 34) 00:07:34.790 11241.945 - 11292.357: 94.3716% ( 41) 00:07:34.790 11292.357 - 
11342.769: 94.5370% ( 29) 00:07:34.790 11342.769 - 11393.182: 94.6966% ( 28) 00:07:34.790 11393.182 - 11443.594: 94.8392% ( 25) 00:07:34.790 11443.594 - 11494.006: 94.9989% ( 28) 00:07:34.790 11494.006 - 11544.418: 95.1471% ( 26) 00:07:34.790 11544.418 - 11594.831: 95.3011% ( 27) 00:07:34.790 11594.831 - 11645.243: 95.4323% ( 23) 00:07:34.790 11645.243 - 11695.655: 95.5406% ( 19) 00:07:34.790 11695.655 - 11746.068: 95.6432% ( 18) 00:07:34.790 11746.068 - 11796.480: 95.7801% ( 24) 00:07:34.790 11796.480 - 11846.892: 95.9284% ( 26) 00:07:34.790 11846.892 - 11897.305: 96.0139% ( 15) 00:07:34.790 11897.305 - 11947.717: 96.0652% ( 9) 00:07:34.790 11947.717 - 11998.129: 96.1223% ( 10) 00:07:34.790 11998.129 - 12048.542: 96.2135% ( 16) 00:07:34.790 12048.542 - 12098.954: 96.2990% ( 15) 00:07:34.790 12098.954 - 12149.366: 96.3903% ( 16) 00:07:34.790 12149.366 - 12199.778: 96.4815% ( 16) 00:07:34.790 12199.778 - 12250.191: 96.5614% ( 14) 00:07:34.790 12250.191 - 12300.603: 96.6241% ( 11) 00:07:34.790 12300.603 - 12351.015: 96.7096% ( 15) 00:07:34.790 12351.015 - 12401.428: 96.8180% ( 19) 00:07:34.790 12401.428 - 12451.840: 96.9149% ( 17) 00:07:34.790 12451.840 - 12502.252: 96.9948% ( 14) 00:07:34.790 12502.252 - 12552.665: 97.0803% ( 15) 00:07:34.790 12552.665 - 12603.077: 97.1430% ( 11) 00:07:34.790 12603.077 - 12653.489: 97.2229% ( 14) 00:07:34.790 12653.489 - 12703.902: 97.2913% ( 12) 00:07:34.790 12703.902 - 12754.314: 97.3597% ( 12) 00:07:34.790 12754.314 - 12804.726: 97.4281% ( 12) 00:07:34.790 12804.726 - 12855.138: 97.5080% ( 14) 00:07:34.790 12855.138 - 12905.551: 97.5878% ( 14) 00:07:34.790 12905.551 - 13006.375: 97.7532% ( 29) 00:07:34.790 13006.375 - 13107.200: 97.8901% ( 24) 00:07:34.790 13107.200 - 13208.025: 98.0041% ( 20) 00:07:34.790 13208.025 - 13308.849: 98.1068% ( 18) 00:07:34.790 13308.849 - 13409.674: 98.1695% ( 11) 00:07:34.790 13409.674 - 13510.498: 98.1752% ( 1) 00:07:34.790 14821.218 - 14922.043: 98.1866% ( 2) 00:07:34.790 14922.043 - 15022.868: 98.2208% ( 6) 00:07:34.790 15022.868 - 15123.692: 98.2664% ( 8) 00:07:34.790 15123.692 - 15224.517: 98.3463% ( 14) 00:07:34.790 15224.517 - 15325.342: 98.4261% ( 14) 00:07:34.790 15325.342 - 15426.166: 98.5002% ( 13) 00:07:34.790 15426.166 - 15526.991: 98.5801% ( 14) 00:07:34.790 15526.991 - 15627.815: 98.6542% ( 13) 00:07:34.790 15627.815 - 15728.640: 98.7340% ( 14) 00:07:34.790 15728.640 - 15829.465: 98.8139% ( 14) 00:07:34.790 15829.465 - 15930.289: 98.8652% ( 9) 00:07:34.790 15930.289 - 16031.114: 98.9051% ( 7) 00:07:34.790 16031.114 - 16131.938: 98.9735% ( 12) 00:07:34.790 16131.938 - 16232.763: 98.9964% ( 4) 00:07:34.790 16232.763 - 16333.588: 99.0363% ( 7) 00:07:34.790 16333.588 - 16434.412: 99.0705% ( 6) 00:07:34.790 16434.412 - 16535.237: 99.1104% ( 7) 00:07:34.790 16535.237 - 16636.062: 99.1446% ( 6) 00:07:34.790 16636.062 - 16736.886: 99.1845% ( 7) 00:07:34.790 16736.886 - 16837.711: 99.2245% ( 7) 00:07:34.790 16837.711 - 16938.535: 99.2530% ( 5) 00:07:34.790 16938.535 - 17039.360: 99.2701% ( 3) 00:07:34.790 22282.240 - 22383.065: 99.2872% ( 3) 00:07:34.790 22383.065 - 22483.889: 99.3100% ( 4) 00:07:34.790 22483.889 - 22584.714: 99.3328% ( 4) 00:07:34.790 22584.714 - 22685.538: 99.3556% ( 4) 00:07:34.790 22685.538 - 22786.363: 99.3784% ( 4) 00:07:34.790 22786.363 - 22887.188: 99.4069% ( 5) 00:07:34.790 22887.188 - 22988.012: 99.4297% ( 4) 00:07:34.790 22988.012 - 23088.837: 99.4526% ( 4) 00:07:34.790 23088.837 - 23189.662: 99.4811% ( 5) 00:07:34.790 23189.662 - 23290.486: 99.5039% ( 4) 00:07:34.790 23290.486 - 23391.311: 
99.5267% ( 4) 00:07:34.790 23391.311 - 23492.135: 99.5495% ( 4) 00:07:34.790 23492.135 - 23592.960: 99.5723% ( 4) 00:07:34.790 23592.960 - 23693.785: 99.5951% ( 4) 00:07:34.790 23693.785 - 23794.609: 99.6179% ( 4) 00:07:34.790 23794.609 - 23895.434: 99.6350% ( 3) 00:07:34.790 26617.698 - 26819.348: 99.6750% ( 7) 00:07:34.790 26819.348 - 27020.997: 99.7263% ( 9) 00:07:34.790 27020.997 - 27222.646: 99.7719% ( 8) 00:07:34.790 27222.646 - 27424.295: 99.8175% ( 8) 00:07:34.790 27424.295 - 27625.945: 99.8631% ( 8) 00:07:34.790 27625.945 - 27827.594: 99.9088% ( 8) 00:07:34.790 27827.594 - 28029.243: 99.9601% ( 9) 00:07:34.790 28029.243 - 28230.892: 100.0000% ( 7) 00:07:34.790 00:07:34.790 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.790 ============================================================================== 00:07:34.790 Range in us Cumulative IO count 00:07:34.790 5167.262 - 5192.468: 0.0057% ( 1) 00:07:34.790 5192.468 - 5217.674: 0.0285% ( 4) 00:07:34.790 5217.674 - 5242.880: 0.0570% ( 5) 00:07:34.790 5242.880 - 5268.086: 0.1255% ( 12) 00:07:34.790 5268.086 - 5293.292: 0.2509% ( 22) 00:07:34.790 5293.292 - 5318.498: 0.4619% ( 37) 00:07:34.790 5318.498 - 5343.705: 0.5931% ( 23) 00:07:34.790 5343.705 - 5368.911: 0.7242% ( 23) 00:07:34.790 5368.911 - 5394.117: 0.8383% ( 20) 00:07:34.790 5394.117 - 5419.323: 0.9637% ( 22) 00:07:34.790 5419.323 - 5444.529: 1.1462% ( 32) 00:07:34.790 5444.529 - 5469.735: 1.2945% ( 26) 00:07:34.790 5469.735 - 5494.942: 1.4370% ( 25) 00:07:34.790 5494.942 - 5520.148: 1.5511% ( 20) 00:07:34.790 5520.148 - 5545.354: 1.7336% ( 32) 00:07:34.790 5545.354 - 5570.560: 1.9332% ( 35) 00:07:34.790 5570.560 - 5595.766: 2.1898% ( 45) 00:07:34.790 5595.766 - 5620.972: 2.5319% ( 60) 00:07:34.790 5620.972 - 5646.178: 3.0566% ( 92) 00:07:34.790 5646.178 - 5671.385: 3.7637% ( 124) 00:07:34.790 5671.385 - 5696.591: 4.4765% ( 125) 00:07:34.790 5696.591 - 5721.797: 5.5771% ( 193) 00:07:34.790 5721.797 - 5747.003: 6.6891% ( 195) 00:07:34.790 5747.003 - 5772.209: 7.7840% ( 192) 00:07:34.790 5772.209 - 5797.415: 9.1469% ( 239) 00:07:34.790 5797.415 - 5822.622: 10.4927% ( 236) 00:07:34.790 5822.622 - 5847.828: 11.8898% ( 245) 00:07:34.790 5847.828 - 5873.034: 13.3611% ( 258) 00:07:34.790 5873.034 - 5898.240: 14.7411% ( 242) 00:07:34.790 5898.240 - 5923.446: 16.2409% ( 263) 00:07:34.790 5923.446 - 5948.652: 17.6950% ( 255) 00:07:34.790 5948.652 - 5973.858: 19.1663% ( 258) 00:07:34.790 5973.858 - 5999.065: 20.7231% ( 273) 00:07:34.790 5999.065 - 6024.271: 22.3540% ( 286) 00:07:34.790 6024.271 - 6049.477: 23.9564% ( 281) 00:07:34.790 6049.477 - 6074.683: 25.7185% ( 309) 00:07:34.790 6074.683 - 6099.889: 27.4407% ( 302) 00:07:34.790 6099.889 - 6125.095: 29.1058% ( 292) 00:07:34.790 6125.095 - 6150.302: 30.7824% ( 294) 00:07:34.790 6150.302 - 6175.508: 32.4418% ( 291) 00:07:34.790 6175.508 - 6200.714: 34.0956% ( 290) 00:07:34.790 6200.714 - 6225.920: 35.8006% ( 299) 00:07:34.790 6225.920 - 6251.126: 37.5513% ( 307) 00:07:34.790 6251.126 - 6276.332: 39.3077% ( 308) 00:07:34.790 6276.332 - 6301.538: 41.0527% ( 306) 00:07:34.790 6301.538 - 6326.745: 42.8091% ( 308) 00:07:34.790 6326.745 - 6351.951: 44.6681% ( 326) 00:07:34.790 6351.951 - 6377.157: 46.4530% ( 313) 00:07:34.790 6377.157 - 6402.363: 48.2721% ( 319) 00:07:34.790 6402.363 - 6427.569: 49.9829% ( 300) 00:07:34.790 6427.569 - 6452.775: 51.5739% ( 279) 00:07:34.790 6452.775 - 6503.188: 54.4651% ( 507) 00:07:34.790 6503.188 - 6553.600: 56.8830% ( 424) 00:07:34.790 6553.600 - 6604.012: 58.9701% ( 366) 
00:07:34.790 6604.012 - 6654.425: 60.7436% ( 311) 00:07:34.790 6654.425 - 6704.837: 62.2434% ( 263) 00:07:34.790 6704.837 - 6755.249: 63.7032% ( 256) 00:07:34.790 6755.249 - 6805.662: 64.9179% ( 213) 00:07:34.790 6805.662 - 6856.074: 65.9044% ( 173) 00:07:34.790 6856.074 - 6906.486: 66.7256% ( 144) 00:07:34.790 6906.486 - 6956.898: 67.4840% ( 133) 00:07:34.790 6956.898 - 7007.311: 68.2425% ( 133) 00:07:34.791 7007.311 - 7057.723: 68.9382% ( 122) 00:07:34.791 7057.723 - 7108.135: 69.6510% ( 125) 00:07:34.791 7108.135 - 7158.548: 70.3581% ( 124) 00:07:34.791 7158.548 - 7208.960: 70.9398% ( 102) 00:07:34.791 7208.960 - 7259.372: 71.4872% ( 96) 00:07:34.791 7259.372 - 7309.785: 72.0176% ( 93) 00:07:34.791 7309.785 - 7360.197: 72.5251% ( 89) 00:07:34.791 7360.197 - 7410.609: 73.0440% ( 91) 00:07:34.791 7410.609 - 7461.022: 73.4945% ( 79) 00:07:34.791 7461.022 - 7511.434: 73.9450% ( 79) 00:07:34.791 7511.434 - 7561.846: 74.3727% ( 75) 00:07:34.791 7561.846 - 7612.258: 74.7776% ( 71) 00:07:34.791 7612.258 - 7662.671: 75.1996% ( 74) 00:07:34.791 7662.671 - 7713.083: 75.6615% ( 81) 00:07:34.791 7713.083 - 7763.495: 76.0835% ( 74) 00:07:34.791 7763.495 - 7813.908: 76.4884% ( 71) 00:07:34.791 7813.908 - 7864.320: 76.9104% ( 74) 00:07:34.791 7864.320 - 7914.732: 77.2582% ( 61) 00:07:34.791 7914.732 - 7965.145: 77.5719% ( 55) 00:07:34.791 7965.145 - 8015.557: 77.9311% ( 63) 00:07:34.791 8015.557 - 8065.969: 78.3189% ( 68) 00:07:34.791 8065.969 - 8116.382: 78.7010% ( 67) 00:07:34.791 8116.382 - 8166.794: 79.0431% ( 60) 00:07:34.791 8166.794 - 8217.206: 79.3910% ( 61) 00:07:34.791 8217.206 - 8267.618: 79.7103% ( 56) 00:07:34.791 8267.618 - 8318.031: 80.0240% ( 55) 00:07:34.791 8318.031 - 8368.443: 80.3946% ( 65) 00:07:34.791 8368.443 - 8418.855: 80.7539% ( 63) 00:07:34.791 8418.855 - 8469.268: 81.0960% ( 60) 00:07:34.791 8469.268 - 8519.680: 81.3755% ( 49) 00:07:34.791 8519.680 - 8570.092: 81.6777% ( 53) 00:07:34.791 8570.092 - 8620.505: 81.9742% ( 52) 00:07:34.791 8620.505 - 8670.917: 82.2251% ( 44) 00:07:34.791 8670.917 - 8721.329: 82.5217% ( 52) 00:07:34.791 8721.329 - 8771.742: 82.8125% ( 51) 00:07:34.791 8771.742 - 8822.154: 83.1090% ( 52) 00:07:34.791 8822.154 - 8872.566: 83.3771% ( 47) 00:07:34.791 8872.566 - 8922.978: 83.6622% ( 50) 00:07:34.791 8922.978 - 8973.391: 84.0100% ( 61) 00:07:34.791 8973.391 - 9023.803: 84.3522% ( 60) 00:07:34.791 9023.803 - 9074.215: 84.6544% ( 53) 00:07:34.791 9074.215 - 9124.628: 84.9567% ( 53) 00:07:34.791 9124.628 - 9175.040: 85.2703% ( 55) 00:07:34.791 9175.040 - 9225.452: 85.6068% ( 59) 00:07:34.791 9225.452 - 9275.865: 85.8976% ( 51) 00:07:34.791 9275.865 - 9326.277: 86.1827% ( 50) 00:07:34.791 9326.277 - 9376.689: 86.4450% ( 46) 00:07:34.791 9376.689 - 9427.102: 86.7016% ( 45) 00:07:34.791 9427.102 - 9477.514: 86.9583% ( 45) 00:07:34.791 9477.514 - 9527.926: 87.2206% ( 46) 00:07:34.791 9527.926 - 9578.338: 87.5114% ( 51) 00:07:34.791 9578.338 - 9628.751: 87.7794% ( 47) 00:07:34.791 9628.751 - 9679.163: 88.0531% ( 48) 00:07:34.791 9679.163 - 9729.575: 88.3155% ( 46) 00:07:34.791 9729.575 - 9779.988: 88.5949% ( 49) 00:07:34.791 9779.988 - 9830.400: 88.8230% ( 40) 00:07:34.791 9830.400 - 9880.812: 89.0568% ( 41) 00:07:34.791 9880.812 - 9931.225: 89.2564% ( 35) 00:07:34.791 9931.225 - 9981.637: 89.4560% ( 35) 00:07:34.791 9981.637 - 10032.049: 89.6898% ( 41) 00:07:34.791 10032.049 - 10082.462: 89.8951% ( 36) 00:07:34.791 10082.462 - 10132.874: 90.1061% ( 37) 00:07:34.791 10132.874 - 10183.286: 90.3228% ( 38) 00:07:34.791 10183.286 - 10233.698: 90.5566% ( 41) 
00:07:34.791 [ tail of the preceding per-bucket latency histogram omitted: buckets 10233.698us through 26819.348us, cumulative 90.7790% -> 100.0000% ]
00:07:34.791
00:07:34.791 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:34.791 ==============================================================================
00:07:34.791 Range in us Cumulative IO count
00:07:34.793 [ per-bucket latency data omitted: buckets 5167.262us through 25004.505us, cumulative 0.0171% -> 100.0000% ]
00:07:34.793
00:07:34.793 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:34.793 ==============================================================================
00:07:34.793 Range in us Cumulative IO count
00:07:34.794 [ per-bucket latency data omitted: buckets 5142.055us through 23189.662us, cumulative 0.0171% -> 100.0000% ]
00:07:34.794
00:07:34.794 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:34.794 ==============================================================================
00:07:34.794 Range in us Cumulative IO count
00:07:34.795 [ per-bucket latency data omitted: buckets 5142.055us through 18249.255us, cumulative 0.0057% -> 100.0000% ]
00:07:34.795
00:07:34.795 09:13:26 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
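For readers of this log, the invocation above expands as follows. The flag meanings are taken from spdk_nvme_perf's usage text rather than from anything printed here, so treat the annotations as assumptions:

# annotated copy of the invocation logged above
# -q 128    queue depth: keep 128 I/Os outstanding per namespace
# -w write  I/O pattern: 100% writes
# -o 12288  I/O size in bytes (12 KiB per I/O)
# -t 1      run time in seconds
# -LL       software latency tracking; giving -L twice also requests the
#           detailed per-bucket histograms that appear later in this output
# -i 0      shared memory group ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0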
00:07:36.203 Initializing NVMe Controllers
00:07:36.203 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:36.203 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:36.203 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:36.203 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:36.203 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:36.203 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:36.203 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:36.203 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:36.203 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:36.203 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:36.203 Initialization complete. Launching workers.
00:07:36.203 ========================================================
00:07:36.203 Latency(us)
00:07:36.203 Device Information : IOPS MiB/s Average min max
00:07:36.203 PCIE (0000:00:10.0) NSID 1 from core 0: 18238.09 213.73 7027.82 5023.20 32228.68
00:07:36.203 PCIE (0000:00:11.0) NSID 1 from core 0: 18238.09 213.73 7017.11 5118.06 30375.92
00:07:36.203 PCIE (0000:00:13.0) NSID 1 from core 0: 18238.09 213.73 7006.49 5071.69 28797.15
00:07:36.203 PCIE (0000:00:12.0) NSID 1 from core 0: 18238.09 213.73 6995.71 5136.23 26998.54
00:07:36.203 PCIE (0000:00:12.0) NSID 2 from core 0: 18238.09 213.73 6984.96 5091.21 25230.91
00:07:36.203 PCIE (0000:00:12.0) NSID 3 from core 0: 18302.08 214.48 6949.82 5130.98 20272.04
00:07:36.203 ========================================================
00:07:36.203 Total : 109492.50 1283.12 6996.96 5023.20 32228.68
00:07:36.203
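As a sanity check on the table above (editorial arithmetic, not part of the captured output): each I/O is 12288 bytes = 12288/1048576 ~ 0.01172 MiB, so 18238.09 IOPS x 0.01172 MiB ~ 213.73 MiB/s, matching the MiB/s column, and the Total row likewise gives 109492.50 x 0.01172 ~ 1283.12 MiB/s. By Little's law, a queue depth of 128 divided by the 7027.82us average latency predicts ~ 18213 IOPS, in line with the measured 18238.09.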
00:07:36.203 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5444.529us
00:07:36.203 10.00000% : 5973.858us
00:07:36.203 25.00000% : 6326.745us
00:07:36.203 50.00000% : 6755.249us
00:07:36.203 75.00000% : 7158.548us
00:07:36.203 90.00000% : 7965.145us
00:07:36.203 95.00000% : 8822.154us
00:07:36.203 98.00000% : 11040.295us
00:07:36.203 99.00000% : 12098.954us
00:07:36.203 99.50000% : 26617.698us
00:07:36.203 99.90000% : 31860.578us
00:07:36.203 99.99000% : 32263.877us
00:07:36.203 99.99900% : 32263.877us
00:07:36.203 99.99990% : 32263.877us
00:07:36.203 99.99999% : 32263.877us
00:07:36.203
00:07:36.203 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5520.148us
00:07:36.203 10.00000% : 5999.065us
00:07:36.203 25.00000% : 6351.951us
00:07:36.203 50.00000% : 6805.662us
00:07:36.203 75.00000% : 7108.135us
00:07:36.203 90.00000% : 7864.320us
00:07:36.203 95.00000% : 8822.154us
00:07:36.203 98.00000% : 11191.532us
00:07:36.203 99.00000% : 13006.375us
00:07:36.203 99.50000% : 25105.329us
00:07:36.203 99.90000% : 30045.735us
00:07:36.203 99.99000% : 30449.034us
00:07:36.203 99.99900% : 30449.034us
00:07:36.203 99.99990% : 30449.034us
00:07:36.203 99.99999% : 30449.034us
00:07:36.203
00:07:36.203 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5469.735us
00:07:36.203 10.00000% : 5973.858us
00:07:36.203 25.00000% : 6351.951us
00:07:36.203 50.00000% : 6805.662us
00:07:36.203 75.00000% : 7108.135us
00:07:36.203 90.00000% : 7914.732us
00:07:36.203 95.00000% : 8822.154us
00:07:36.203 98.00000% : 10636.997us
00:07:36.203 99.00000% : 13208.025us
00:07:36.203 99.50000% : 23592.960us
00:07:36.203 99.90000% : 28432.542us
00:07:36.203 99.99000% : 28835.840us
00:07:36.203 99.99900% : 28835.840us
00:07:36.203 99.99990% : 28835.840us
00:07:36.203 99.99999% : 28835.840us
00:07:36.203
00:07:36.203 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5469.735us
00:07:36.203 10.00000% : 5999.065us
00:07:36.203 25.00000% : 6351.951us
00:07:36.203 50.00000% : 6755.249us
00:07:36.203 75.00000% : 7108.135us
00:07:36.203 90.00000% : 7914.732us
00:07:36.203 95.00000% : 8822.154us
00:07:36.203 98.00000% : 10586.585us
00:07:36.203 99.00000% : 13208.025us
00:07:36.203 99.50000% : 21878.942us
00:07:36.203 99.90000% : 26617.698us
00:07:36.203 99.99000% : 27020.997us
00:07:36.203 99.99900% : 27020.997us
00:07:36.203 99.99990% : 27020.997us
00:07:36.203 99.99999% : 27020.997us
00:07:36.203
00:07:36.203 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5494.942us
00:07:36.203 10.00000% : 5999.065us
00:07:36.203 25.00000% : 6351.951us
00:07:36.203 50.00000% : 6755.249us
00:07:36.203 75.00000% : 7108.135us
00:07:36.203 90.00000% : 7965.145us
00:07:36.203 95.00000% : 8872.566us
00:07:36.203 98.00000% : 10838.646us
00:07:36.203 99.00000% : 12502.252us
00:07:36.203 99.50000% : 20164.923us
00:07:36.203 99.90000% : 24903.680us
00:07:36.203 99.99000% : 25306.978us
00:07:36.203 99.99900% : 25306.978us
00:07:36.203 99.99990% : 25306.978us
00:07:36.203 99.99999% : 25306.978us
00:07:36.203
00:07:36.203 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:36.203 =================================================================================
00:07:36.203 1.00000% : 5494.942us
00:07:36.203 10.00000% : 5999.065us
00:07:36.203 25.00000% : 6351.951us
00:07:36.203 50.00000% : 6805.662us
00:07:36.203 75.00000% : 7108.135us
00:07:36.203 90.00000% : 8015.557us
00:07:36.203 95.00000% : 8872.566us
00:07:36.203 98.00000% : 11393.182us
00:07:36.203 99.00000% : 11998.129us
00:07:36.203 99.50000% : 14317.095us
00:07:36.203 99.90000% : 19862.449us
00:07:36.203 99.99000% : 20265.748us
00:07:36.203 99.99900% : 20366.572us
00:07:36.203 99.99990% : 20366.572us
00:07:36.203 99.99999% : 20366.572us
00:07:36.203
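Each histogram row below pairs a latency range with the cumulative percentage of I/Os completed at or below the bucket's upper bound, plus the raw count for that bucket; the percentile summaries above line up with these rows, the Nth percentile being the upper bound of the first bucket whose cumulative percentage reaches N. A minimal lookup sketch in shell, assuming one device's bucket rows have been saved to histogram.txt with the leading timestamps stripped (the file name and the stripping step are illustrative assumptions):

# print the upper bound of the first bucket whose cumulative percentage
# reaches the target; rows look like "5016.025 - 5041.231: 0.0110% ( 2)"
awk -v target=99 '
  $2 == "-" {
    pct = $4; sub(/%/, "", pct)     # cumulative column, e.g. "0.0110%"
    if (pct + 0 >= target) {
      up = $3; sub(/:$/, "", up)    # bucket upper bound, e.g. "5041.231:"
      print up "us"; exit
    }
  }' histogram.txt

For the PCIE (0000:00:10.0) NSID 1 data this prints 12098.954us at target=99, matching the 99.00000% : 12098.954us line in its summary block above.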
00:07:36.203 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:36.203 ==============================================================================
00:07:36.203 Range in us Cumulative IO count
00:07:36.204 [ per-bucket latency data omitted: buckets 5016.025us through 32263.877us, cumulative 0.0110% -> 100.0000% ]
00:07:36.204
00:07:36.204 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:36.204 ==============================================================================
00:07:36.204 Range in us Cumulative IO count
00:07:36.205 [ per-bucket latency data omitted: buckets 5116.849us through 30449.034us, cumulative 0.0055% -> 100.0000% ]
00:07:36.205
00:07:36.205 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:36.205 ==============================================================================
00:07:36.205 Range in us Cumulative IO count
00:07:36.206 [ per-bucket latency data from 5066.437us through 5948.652us (cumulative 9.6875%) omitted; the dump continues below ]
00:07:36.206 5948.652 - 5973.858:
10.1096% ( 77) 00:07:36.206 5973.858 - 5999.065: 10.6250% ( 94) 00:07:36.206 5999.065 - 6024.271: 11.0636% ( 80) 00:07:36.206 6024.271 - 6049.477: 11.5899% ( 96) 00:07:36.206 6049.477 - 6074.683: 12.1875% ( 109) 00:07:36.206 6074.683 - 6099.889: 12.9221% ( 134) 00:07:36.206 6099.889 - 6125.095: 13.7336% ( 148) 00:07:36.206 6125.095 - 6150.302: 14.5175% ( 143) 00:07:36.206 6150.302 - 6175.508: 15.2412% ( 132) 00:07:36.206 6175.508 - 6200.714: 16.6831% ( 263) 00:07:36.206 6200.714 - 6225.920: 18.2456% ( 285) 00:07:36.206 6225.920 - 6251.126: 20.1590% ( 349) 00:07:36.206 6251.126 - 6276.332: 21.3871% ( 224) 00:07:36.206 6276.332 - 6301.538: 22.8454% ( 266) 00:07:36.206 6301.538 - 6326.745: 24.3586% ( 276) 00:07:36.206 6326.745 - 6351.951: 26.1952% ( 335) 00:07:36.206 6351.951 - 6377.157: 27.7138% ( 277) 00:07:36.206 6377.157 - 6402.363: 29.0570% ( 245) 00:07:36.206 6402.363 - 6427.569: 30.5921% ( 280) 00:07:36.206 6427.569 - 6452.775: 32.3026% ( 312) 00:07:36.206 6452.775 - 6503.188: 35.5428% ( 591) 00:07:36.206 6503.188 - 6553.600: 37.7686% ( 406) 00:07:36.206 6553.600 - 6604.012: 40.4276% ( 485) 00:07:36.206 6604.012 - 6654.425: 42.8509% ( 442) 00:07:36.206 6654.425 - 6704.837: 45.5099% ( 485) 00:07:36.206 6704.837 - 6755.249: 49.7204% ( 768) 00:07:36.206 6755.249 - 6805.662: 54.0022% ( 781) 00:07:36.206 6805.662 - 6856.074: 58.3991% ( 802) 00:07:36.206 6856.074 - 6906.486: 62.8070% ( 804) 00:07:36.206 6906.486 - 6956.898: 66.7873% ( 726) 00:07:36.206 6956.898 - 7007.311: 70.2741% ( 636) 00:07:36.206 7007.311 - 7057.723: 73.6678% ( 619) 00:07:36.206 7057.723 - 7108.135: 76.1842% ( 459) 00:07:36.206 7108.135 - 7158.548: 78.4101% ( 406) 00:07:36.206 7158.548 - 7208.960: 79.8684% ( 266) 00:07:36.206 7208.960 - 7259.372: 81.1349% ( 231) 00:07:36.206 7259.372 - 7309.785: 82.0559% ( 168) 00:07:36.206 7309.785 - 7360.197: 82.7906% ( 134) 00:07:36.206 7360.197 - 7410.609: 83.7610% ( 177) 00:07:36.206 7410.609 - 7461.022: 84.4518% ( 126) 00:07:36.206 7461.022 - 7511.434: 85.6634% ( 221) 00:07:36.206 7511.434 - 7561.846: 86.3542% ( 126) 00:07:36.206 7561.846 - 7612.258: 86.9956% ( 117) 00:07:36.206 7612.258 - 7662.671: 87.5932% ( 109) 00:07:36.206 7662.671 - 7713.083: 88.0921% ( 91) 00:07:36.206 7713.083 - 7763.495: 88.8158% ( 132) 00:07:36.206 7763.495 - 7813.908: 89.3750% ( 102) 00:07:36.206 7813.908 - 7864.320: 89.8026% ( 78) 00:07:36.206 7864.320 - 7914.732: 90.2522% ( 82) 00:07:36.206 7914.732 - 7965.145: 90.9978% ( 136) 00:07:36.206 7965.145 - 8015.557: 91.3980% ( 73) 00:07:36.206 8015.557 - 8065.969: 91.7379% ( 62) 00:07:36.206 8065.969 - 8116.382: 91.9518% ( 39) 00:07:36.206 8116.382 - 8166.794: 92.1436% ( 35) 00:07:36.206 8166.794 - 8217.206: 92.4068% ( 48) 00:07:36.206 8217.206 - 8267.618: 92.7138% ( 56) 00:07:36.206 8267.618 - 8318.031: 92.8564% ( 26) 00:07:36.206 8318.031 - 8368.443: 92.9825% ( 23) 00:07:36.206 8368.443 - 8418.855: 93.2566% ( 50) 00:07:36.206 8418.855 - 8469.268: 93.7061% ( 82) 00:07:36.206 8469.268 - 8519.680: 94.0132% ( 56) 00:07:36.206 8519.680 - 8570.092: 94.2654% ( 46) 00:07:36.206 8570.092 - 8620.505: 94.4189% ( 28) 00:07:36.206 8620.505 - 8670.917: 94.5121% ( 17) 00:07:36.206 8670.917 - 8721.329: 94.6327% ( 22) 00:07:36.206 8721.329 - 8771.742: 94.8629% ( 42) 00:07:36.206 8771.742 - 8822.154: 95.1371% ( 50) 00:07:36.206 8822.154 - 8872.566: 95.4550% ( 58) 00:07:36.206 8872.566 - 8922.978: 95.5866% ( 24) 00:07:36.206 8922.978 - 8973.391: 95.6963% ( 20) 00:07:36.206 8973.391 - 9023.803: 95.7730% ( 14) 00:07:36.206 9023.803 - 9074.215: 95.8553% ( 15) 00:07:36.206 
9074.215 - 9124.628: 95.9211% ( 12) 00:07:36.206 9124.628 - 9175.040: 95.9649% ( 8) 00:07:36.206 9175.040 - 9225.452: 96.0088% ( 8) 00:07:36.206 9225.452 - 9275.865: 96.0581% ( 9) 00:07:36.206 9275.865 - 9326.277: 96.1458% ( 16) 00:07:36.206 9326.277 - 9376.689: 96.2336% ( 16) 00:07:36.206 9376.689 - 9427.102: 96.3048% ( 13) 00:07:36.206 9427.102 - 9477.514: 96.4748% ( 31) 00:07:36.206 9477.514 - 9527.926: 96.5515% ( 14) 00:07:36.206 9527.926 - 9578.338: 96.6173% ( 12) 00:07:36.206 9578.338 - 9628.751: 96.6776% ( 11) 00:07:36.206 9628.751 - 9679.163: 96.7544% ( 14) 00:07:36.206 9679.163 - 9729.575: 96.7982% ( 8) 00:07:36.206 9729.575 - 9779.988: 96.8750% ( 14) 00:07:36.206 9779.988 - 9830.400: 96.9682% ( 17) 00:07:36.206 9830.400 - 9880.812: 97.0121% ( 8) 00:07:36.206 9880.812 - 9931.225: 97.0724% ( 11) 00:07:36.206 9931.225 - 9981.637: 97.1546% ( 15) 00:07:36.206 9981.637 - 10032.049: 97.3849% ( 42) 00:07:36.206 10032.049 - 10082.462: 97.4781% ( 17) 00:07:36.206 10082.462 - 10132.874: 97.5329% ( 10) 00:07:36.206 10132.874 - 10183.286: 97.5822% ( 9) 00:07:36.206 10183.286 - 10233.698: 97.6151% ( 6) 00:07:36.206 10233.698 - 10284.111: 97.6480% ( 6) 00:07:36.206 10284.111 - 10334.523: 97.6974% ( 9) 00:07:36.206 10334.523 - 10384.935: 97.7522% ( 10) 00:07:36.206 10384.935 - 10435.348: 97.7851% ( 6) 00:07:36.206 10435.348 - 10485.760: 97.8289% ( 8) 00:07:36.206 10485.760 - 10536.172: 97.8618% ( 6) 00:07:36.206 10536.172 - 10586.585: 97.9112% ( 9) 00:07:36.206 10586.585 - 10636.997: 98.0811% ( 31) 00:07:36.206 10636.997 - 10687.409: 98.1195% ( 7) 00:07:36.206 10687.409 - 10737.822: 98.1634% ( 8) 00:07:36.206 10737.822 - 10788.234: 98.2072% ( 8) 00:07:36.206 10788.234 - 10838.646: 98.2566% ( 9) 00:07:36.206 10838.646 - 10889.058: 98.2950% ( 7) 00:07:36.206 10889.058 - 10939.471: 98.3224% ( 5) 00:07:36.206 10939.471 - 10989.883: 98.3443% ( 4) 00:07:36.206 10989.883 - 11040.295: 98.3607% ( 3) 00:07:36.206 11040.295 - 11090.708: 98.3772% ( 3) 00:07:36.206 11090.708 - 11141.120: 98.3991% ( 4) 00:07:36.206 11141.120 - 11191.532: 98.4211% ( 4) 00:07:36.206 11191.532 - 11241.945: 98.4375% ( 3) 00:07:36.206 11241.945 - 11292.357: 98.4594% ( 4) 00:07:36.206 11292.357 - 11342.769: 98.4814% ( 4) 00:07:36.206 11342.769 - 11393.182: 98.4978% ( 3) 00:07:36.206 11393.182 - 11443.594: 98.5197% ( 4) 00:07:36.206 11443.594 - 11494.006: 98.5417% ( 4) 00:07:36.206 11494.006 - 11544.418: 98.5581% ( 3) 00:07:36.206 11544.418 - 11594.831: 98.5746% ( 3) 00:07:36.206 11594.831 - 11645.243: 98.5965% ( 4) 00:07:36.206 11998.129 - 12048.542: 98.6020% ( 1) 00:07:36.206 12098.954 - 12149.366: 98.6239% ( 4) 00:07:36.206 12149.366 - 12199.778: 98.6513% ( 5) 00:07:36.206 12199.778 - 12250.191: 98.6842% ( 6) 00:07:36.206 12250.191 - 12300.603: 98.7719% ( 16) 00:07:36.206 12300.603 - 12351.015: 98.9090% ( 25) 00:07:36.206 12351.015 - 12401.428: 98.9254% ( 3) 00:07:36.206 12401.428 - 12451.840: 98.9419% ( 3) 00:07:36.206 12451.840 - 12502.252: 98.9474% ( 1) 00:07:36.206 13006.375 - 13107.200: 98.9857% ( 7) 00:07:36.206 13107.200 - 13208.025: 99.0241% ( 7) 00:07:36.206 13208.025 - 13308.849: 99.0515% ( 5) 00:07:36.206 13308.849 - 13409.674: 99.1283% ( 14) 00:07:36.206 13409.674 - 13510.498: 99.2599% ( 24) 00:07:36.206 13510.498 - 13611.323: 99.2982% ( 7) 00:07:36.206 22584.714 - 22685.538: 99.3092% ( 2) 00:07:36.206 22685.538 - 22786.363: 99.3311% ( 4) 00:07:36.206 22786.363 - 22887.188: 99.3531% ( 4) 00:07:36.206 22887.188 - 22988.012: 99.3750% ( 4) 00:07:36.206 22988.012 - 23088.837: 99.3969% ( 4) 00:07:36.206 23088.837 - 
23189.662: 99.4189% ( 4) 00:07:36.206 23189.662 - 23290.486: 99.4408% ( 4) 00:07:36.206 23290.486 - 23391.311: 99.4627% ( 4) 00:07:36.206 23391.311 - 23492.135: 99.4846% ( 4) 00:07:36.206 23492.135 - 23592.960: 99.5011% ( 3) 00:07:36.206 23592.960 - 23693.785: 99.5230% ( 4) 00:07:36.206 23693.785 - 23794.609: 99.5450% ( 4) 00:07:36.206 23794.609 - 23895.434: 99.5669% ( 4) 00:07:36.206 23895.434 - 23996.258: 99.5888% ( 4) 00:07:36.206 23996.258 - 24097.083: 99.6107% ( 4) 00:07:36.206 24097.083 - 24197.908: 99.6327% ( 4) 00:07:36.206 24197.908 - 24298.732: 99.6491% ( 3) 00:07:36.206 27020.997 - 27222.646: 99.6656% ( 3) 00:07:36.206 27222.646 - 27424.295: 99.7039% ( 7) 00:07:36.206 27424.295 - 27625.945: 99.7478% ( 8) 00:07:36.206 27625.945 - 27827.594: 99.7917% ( 8) 00:07:36.206 27827.594 - 28029.243: 99.8355% ( 8) 00:07:36.206 28029.243 - 28230.892: 99.8739% ( 7) 00:07:36.206 28230.892 - 28432.542: 99.9178% ( 8) 00:07:36.206 28432.542 - 28634.191: 99.9616% ( 8) 00:07:36.206 28634.191 - 28835.840: 100.0000% ( 7) 00:07:36.206 00:07:36.206 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:36.206 ============================================================================== 00:07:36.206 Range in us Cumulative IO count 00:07:36.206 5116.849 - 5142.055: 0.0055% ( 1) 00:07:36.206 5167.262 - 5192.468: 0.0110% ( 1) 00:07:36.206 5192.468 - 5217.674: 0.0219% ( 2) 00:07:36.206 5217.674 - 5242.880: 0.0329% ( 2) 00:07:36.206 5242.880 - 5268.086: 0.0768% ( 8) 00:07:36.206 5268.086 - 5293.292: 0.1206% ( 8) 00:07:36.206 5293.292 - 5318.498: 0.1864% ( 12) 00:07:36.206 5318.498 - 5343.705: 0.2577% ( 13) 00:07:36.206 5343.705 - 5368.911: 0.3618% ( 19) 00:07:36.206 5368.911 - 5394.117: 0.5263% ( 30) 00:07:36.206 5394.117 - 5419.323: 0.7346% ( 38) 00:07:36.206 5419.323 - 5444.529: 0.8827% ( 27) 00:07:36.206 5444.529 - 5469.735: 1.1458% ( 48) 00:07:36.206 5469.735 - 5494.942: 1.3213% ( 32) 00:07:36.206 5494.942 - 5520.148: 1.5899% ( 49) 00:07:36.206 5520.148 - 5545.354: 2.0395% ( 82) 00:07:36.207 5545.354 - 5570.560: 2.3629% ( 59) 00:07:36.207 5570.560 - 5595.766: 2.8838% ( 95) 00:07:36.207 5595.766 - 5620.972: 3.3827% ( 91) 00:07:36.207 5620.972 - 5646.178: 4.1393% ( 138) 00:07:36.207 5646.178 - 5671.385: 4.6546% ( 94) 00:07:36.207 5671.385 - 5696.591: 5.2303% ( 105) 00:07:36.207 5696.591 - 5721.797: 5.7018% ( 86) 00:07:36.207 5721.797 - 5747.003: 6.2939% ( 108) 00:07:36.207 5747.003 - 5772.209: 6.8969% ( 110) 00:07:36.207 5772.209 - 5797.415: 7.4726% ( 105) 00:07:36.207 5797.415 - 5822.622: 7.7467% ( 50) 00:07:36.207 5822.622 - 5847.828: 8.0044% ( 47) 00:07:36.207 5847.828 - 5873.034: 8.3827% ( 69) 00:07:36.207 5873.034 - 5898.240: 8.7116% ( 60) 00:07:36.207 5898.240 - 5923.446: 9.2982% ( 107) 00:07:36.207 5923.446 - 5948.652: 9.6765% ( 69) 00:07:36.207 5948.652 - 5973.858: 9.9836% ( 56) 00:07:36.207 5973.858 - 5999.065: 10.5263% ( 99) 00:07:36.207 5999.065 - 6024.271: 10.8991% ( 68) 00:07:36.207 6024.271 - 6049.477: 11.3706% ( 86) 00:07:36.207 6049.477 - 6074.683: 11.9298% ( 102) 00:07:36.207 6074.683 - 6099.889: 12.5110% ( 106) 00:07:36.207 6099.889 - 6125.095: 13.3059% ( 145) 00:07:36.207 6125.095 - 6150.302: 14.3805% ( 196) 00:07:36.207 6150.302 - 6175.508: 15.3564% ( 178) 00:07:36.207 6175.508 - 6200.714: 16.5296% ( 214) 00:07:36.207 6200.714 - 6225.920: 17.8947% ( 249) 00:07:36.207 6225.920 - 6251.126: 19.2599% ( 249) 00:07:36.207 6251.126 - 6276.332: 21.0636% ( 329) 00:07:36.207 6276.332 - 6301.538: 22.5987% ( 280) 00:07:36.207 6301.538 - 6326.745: 23.7884% ( 217) 00:07:36.207 6326.745 
- 6351.951: 25.1535% ( 249) 00:07:36.207 6351.951 - 6377.157: 26.9408% ( 326) 00:07:36.207 6377.157 - 6402.363: 28.6075% ( 304) 00:07:36.207 6402.363 - 6427.569: 30.0768% ( 268) 00:07:36.207 6427.569 - 6452.775: 31.8860% ( 330) 00:07:36.207 6452.775 - 6503.188: 35.3180% ( 626) 00:07:36.207 6503.188 - 6553.600: 38.1086% ( 509) 00:07:36.207 6553.600 - 6604.012: 40.7621% ( 484) 00:07:36.207 6604.012 - 6654.425: 43.5965% ( 517) 00:07:36.207 6654.425 - 6704.837: 46.3925% ( 510) 00:07:36.207 6704.837 - 6755.249: 50.1316% ( 682) 00:07:36.207 6755.249 - 6805.662: 53.9748% ( 701) 00:07:36.207 6805.662 - 6856.074: 57.6700% ( 674) 00:07:36.207 6856.074 - 6906.486: 62.1382% ( 815) 00:07:36.207 6906.486 - 6956.898: 65.6524% ( 641) 00:07:36.207 6956.898 - 7007.311: 69.0899% ( 627) 00:07:36.207 7007.311 - 7057.723: 72.7796% ( 673) 00:07:36.207 7057.723 - 7108.135: 75.6250% ( 519) 00:07:36.207 7108.135 - 7158.548: 77.9002% ( 415) 00:07:36.207 7158.548 - 7208.960: 79.4134% ( 276) 00:07:36.207 7208.960 - 7259.372: 80.8224% ( 257) 00:07:36.207 7259.372 - 7309.785: 82.1382% ( 240) 00:07:36.207 7309.785 - 7360.197: 83.1853% ( 191) 00:07:36.207 7360.197 - 7410.609: 84.1996% ( 185) 00:07:36.207 7410.609 - 7461.022: 85.0822% ( 161) 00:07:36.207 7461.022 - 7511.434: 86.0636% ( 179) 00:07:36.207 7511.434 - 7561.846: 86.6612% ( 109) 00:07:36.207 7561.846 - 7612.258: 87.3026% ( 117) 00:07:36.207 7612.258 - 7662.671: 87.7632% ( 84) 00:07:36.207 7662.671 - 7713.083: 88.3333% ( 104) 00:07:36.207 7713.083 - 7763.495: 88.7555% ( 77) 00:07:36.207 7763.495 - 7813.908: 89.2489% ( 90) 00:07:36.207 7813.908 - 7864.320: 89.8410% ( 108) 00:07:36.207 7864.320 - 7914.732: 90.2577% ( 76) 00:07:36.207 7914.732 - 7965.145: 90.6140% ( 65) 00:07:36.207 7965.145 - 8015.557: 91.0197% ( 74) 00:07:36.207 8015.557 - 8065.969: 91.4748% ( 83) 00:07:36.207 8065.969 - 8116.382: 91.7599% ( 52) 00:07:36.207 8116.382 - 8166.794: 92.1107% ( 64) 00:07:36.207 8166.794 - 8217.206: 92.4836% ( 68) 00:07:36.207 8217.206 - 8267.618: 92.8235% ( 62) 00:07:36.207 8267.618 - 8318.031: 93.0811% ( 47) 00:07:36.207 8318.031 - 8368.443: 93.3553% ( 50) 00:07:36.207 8368.443 - 8418.855: 93.4868% ( 24) 00:07:36.207 8418.855 - 8469.268: 93.6020% ( 21) 00:07:36.207 8469.268 - 8519.680: 93.7445% ( 26) 00:07:36.207 8519.680 - 8570.092: 93.9638% ( 40) 00:07:36.207 8570.092 - 8620.505: 94.2105% ( 45) 00:07:36.207 8620.505 - 8670.917: 94.4572% ( 45) 00:07:36.207 8670.917 - 8721.329: 94.7588% ( 55) 00:07:36.207 8721.329 - 8771.742: 94.9836% ( 41) 00:07:36.207 8771.742 - 8822.154: 95.3344% ( 64) 00:07:36.207 8822.154 - 8872.566: 95.4550% ( 22) 00:07:36.207 8872.566 - 8922.978: 95.5318% ( 14) 00:07:36.207 8922.978 - 8973.391: 95.6086% ( 14) 00:07:36.207 8973.391 - 9023.803: 95.7346% ( 23) 00:07:36.207 9023.803 - 9074.215: 95.8662% ( 24) 00:07:36.207 9074.215 - 9124.628: 96.1075% ( 44) 00:07:36.207 9124.628 - 9175.040: 96.2007% ( 17) 00:07:36.207 9175.040 - 9225.452: 96.2719% ( 13) 00:07:36.207 9225.452 - 9275.865: 96.3322% ( 11) 00:07:36.207 9275.865 - 9326.277: 96.3651% ( 6) 00:07:36.207 9326.277 - 9376.689: 96.3925% ( 5) 00:07:36.207 9376.689 - 9427.102: 96.4090% ( 3) 00:07:36.207 9427.102 - 9477.514: 96.4309% ( 4) 00:07:36.207 9477.514 - 9527.926: 96.4583% ( 5) 00:07:36.207 9527.926 - 9578.338: 96.5296% ( 13) 00:07:36.207 9578.338 - 9628.751: 96.5954% ( 12) 00:07:36.207 9628.751 - 9679.163: 96.6776% ( 15) 00:07:36.207 9679.163 - 9729.575: 96.8147% ( 25) 00:07:36.207 9729.575 - 9779.988: 96.8695% ( 10) 00:07:36.207 9779.988 - 9830.400: 96.9024% ( 6) 00:07:36.207 9830.400 - 
9880.812: 96.9408% ( 7) 00:07:36.207 9880.812 - 9931.225: 97.0011% ( 11) 00:07:36.207 9931.225 - 9981.637: 97.1162% ( 21) 00:07:36.207 9981.637 - 10032.049: 97.2259% ( 20) 00:07:36.207 10032.049 - 10082.462: 97.3246% ( 18) 00:07:36.207 10082.462 - 10132.874: 97.3575% ( 6) 00:07:36.207 10132.874 - 10183.286: 97.4178% ( 11) 00:07:36.207 10183.286 - 10233.698: 97.4726% ( 10) 00:07:36.207 10233.698 - 10284.111: 97.5439% ( 13) 00:07:36.207 10284.111 - 10334.523: 97.7851% ( 44) 00:07:36.207 10334.523 - 10384.935: 97.8454% ( 11) 00:07:36.207 10384.935 - 10435.348: 97.8783% ( 6) 00:07:36.207 10435.348 - 10485.760: 97.9221% ( 8) 00:07:36.207 10485.760 - 10536.172: 97.9660% ( 8) 00:07:36.207 10536.172 - 10586.585: 98.0154% ( 9) 00:07:36.207 10586.585 - 10636.997: 98.0592% ( 8) 00:07:36.207 10636.997 - 10687.409: 98.1195% ( 11) 00:07:36.207 10687.409 - 10737.822: 98.2675% ( 27) 00:07:36.207 10737.822 - 10788.234: 98.3169% ( 9) 00:07:36.207 10788.234 - 10838.646: 98.3662% ( 9) 00:07:36.207 10838.646 - 10889.058: 98.4046% ( 7) 00:07:36.207 10889.058 - 10939.471: 98.4320% ( 5) 00:07:36.207 10939.471 - 10989.883: 98.4485% ( 3) 00:07:36.207 10989.883 - 11040.295: 98.4759% ( 5) 00:07:36.207 11040.295 - 11090.708: 98.4978% ( 4) 00:07:36.207 11090.708 - 11141.120: 98.5197% ( 4) 00:07:36.207 11141.120 - 11191.532: 98.5362% ( 3) 00:07:36.207 11191.532 - 11241.945: 98.5581% ( 4) 00:07:36.207 11241.945 - 11292.357: 98.5746% ( 3) 00:07:36.207 11292.357 - 11342.769: 98.5910% ( 3) 00:07:36.207 11342.769 - 11393.182: 98.5965% ( 1) 00:07:36.207 11897.305 - 11947.717: 98.6239% ( 5) 00:07:36.207 11947.717 - 11998.129: 98.6404% ( 3) 00:07:36.207 11998.129 - 12048.542: 98.6623% ( 4) 00:07:36.208 12048.542 - 12098.954: 98.6897% ( 5) 00:07:36.208 12098.954 - 12149.366: 98.7061% ( 3) 00:07:36.208 12149.366 - 12199.778: 98.7281% ( 4) 00:07:36.208 12199.778 - 12250.191: 98.7500% ( 4) 00:07:36.208 12250.191 - 12300.603: 98.7719% ( 4) 00:07:36.208 12300.603 - 12351.015: 98.7884% ( 3) 00:07:36.208 12351.015 - 12401.428: 98.8048% ( 3) 00:07:36.208 12401.428 - 12451.840: 98.8268% ( 4) 00:07:36.208 12451.840 - 12502.252: 98.8432% ( 3) 00:07:36.208 12502.252 - 12552.665: 98.8596% ( 3) 00:07:36.208 12552.665 - 12603.077: 98.8761% ( 3) 00:07:36.208 12603.077 - 12653.489: 98.8980% ( 4) 00:07:36.208 12653.489 - 12703.902: 98.9200% ( 4) 00:07:36.208 12703.902 - 12754.314: 98.9419% ( 4) 00:07:36.208 12754.314 - 12804.726: 98.9474% ( 1) 00:07:36.208 13006.375 - 13107.200: 98.9803% ( 6) 00:07:36.208 13107.200 - 13208.025: 99.0351% ( 10) 00:07:36.208 13208.025 - 13308.849: 99.1009% ( 12) 00:07:36.208 13308.849 - 13409.674: 99.1557% ( 10) 00:07:36.208 13409.674 - 13510.498: 99.1612% ( 1) 00:07:36.208 13510.498 - 13611.323: 99.1721% ( 2) 00:07:36.208 13611.323 - 13712.148: 99.1941% ( 4) 00:07:36.208 13712.148 - 13812.972: 99.2215% ( 5) 00:07:36.208 13812.972 - 13913.797: 99.2379% ( 3) 00:07:36.208 13913.797 - 14014.622: 99.2818% ( 8) 00:07:36.208 14014.622 - 14115.446: 99.2982% ( 3) 00:07:36.208 20870.695 - 20971.520: 99.3092% ( 2) 00:07:36.208 20971.520 - 21072.345: 99.3257% ( 3) 00:07:36.208 21072.345 - 21173.169: 99.3476% ( 4) 00:07:36.208 21173.169 - 21273.994: 99.3695% ( 4) 00:07:36.208 21273.994 - 21374.818: 99.3914% ( 4) 00:07:36.208 21374.818 - 21475.643: 99.4134% ( 4) 00:07:36.208 21475.643 - 21576.468: 99.4353% ( 4) 00:07:36.208 21576.468 - 21677.292: 99.4572% ( 4) 00:07:36.208 21677.292 - 21778.117: 99.4792% ( 4) 00:07:36.208 21778.117 - 21878.942: 99.5011% ( 4) 00:07:36.208 21878.942 - 21979.766: 99.5230% ( 4) 00:07:36.208 
21979.766 - 22080.591: 99.5395% ( 3) 00:07:36.208 22080.591 - 22181.415: 99.5669% ( 5) 00:07:36.208 22181.415 - 22282.240: 99.5833% ( 3) 00:07:36.208 22282.240 - 22383.065: 99.6053% ( 4) 00:07:36.208 22383.065 - 22483.889: 99.6272% ( 4) 00:07:36.208 22483.889 - 22584.714: 99.6491% ( 4) 00:07:36.208 25407.803 - 25508.628: 99.6656% ( 3) 00:07:36.208 25508.628 - 25609.452: 99.6820% ( 3) 00:07:36.208 25609.452 - 25710.277: 99.7039% ( 4) 00:07:36.208 25710.277 - 25811.102: 99.7259% ( 4) 00:07:36.208 25811.102 - 26012.751: 99.7697% ( 8) 00:07:36.208 26012.751 - 26214.400: 99.8191% ( 9) 00:07:36.208 26214.400 - 26416.049: 99.8629% ( 8) 00:07:36.208 26416.049 - 26617.698: 99.9123% ( 9) 00:07:36.208 26617.698 - 26819.348: 99.9561% ( 8) 00:07:36.208 26819.348 - 27020.997: 100.0000% ( 8) 00:07:36.208 00:07:36.208 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:36.208 ============================================================================== 00:07:36.208 Range in us Cumulative IO count 00:07:36.208 5066.437 - 5091.643: 0.0055% ( 1) 00:07:36.208 5091.643 - 5116.849: 0.0110% ( 1) 00:07:36.208 5142.055 - 5167.262: 0.0219% ( 2) 00:07:36.208 5192.468 - 5217.674: 0.0329% ( 2) 00:07:36.208 5242.880 - 5268.086: 0.0384% ( 1) 00:07:36.208 5268.086 - 5293.292: 0.0603% ( 4) 00:07:36.208 5293.292 - 5318.498: 0.1042% ( 8) 00:07:36.208 5318.498 - 5343.705: 0.1754% ( 13) 00:07:36.208 5343.705 - 5368.911: 0.2357% ( 11) 00:07:36.208 5368.911 - 5394.117: 0.3125% ( 14) 00:07:36.208 5394.117 - 5419.323: 0.4496% ( 25) 00:07:36.208 5419.323 - 5444.529: 0.6524% ( 37) 00:07:36.208 5444.529 - 5469.735: 0.9046% ( 46) 00:07:36.208 5469.735 - 5494.942: 1.1458% ( 44) 00:07:36.208 5494.942 - 5520.148: 1.5461% ( 73) 00:07:36.208 5520.148 - 5545.354: 1.8860% ( 62) 00:07:36.208 5545.354 - 5570.560: 2.3410% ( 83) 00:07:36.208 5570.560 - 5595.766: 2.7522% ( 75) 00:07:36.208 5595.766 - 5620.972: 3.3443% ( 108) 00:07:36.208 5620.972 - 5646.178: 4.1831% ( 153) 00:07:36.208 5646.178 - 5671.385: 4.8629% ( 124) 00:07:36.208 5671.385 - 5696.591: 5.4057% ( 99) 00:07:36.208 5696.591 - 5721.797: 5.8333% ( 78) 00:07:36.208 5721.797 - 5747.003: 6.2664% ( 79) 00:07:36.208 5747.003 - 5772.209: 6.5899% ( 59) 00:07:36.208 5772.209 - 5797.415: 6.8531% ( 48) 00:07:36.208 5797.415 - 5822.622: 7.1601% ( 56) 00:07:36.208 5822.622 - 5847.828: 7.4616% ( 55) 00:07:36.208 5847.828 - 5873.034: 8.1798% ( 131) 00:07:36.208 5873.034 - 5898.240: 8.6404% ( 84) 00:07:36.208 5898.240 - 5923.446: 9.0735% ( 79) 00:07:36.208 5923.446 - 5948.652: 9.3476% ( 50) 00:07:36.208 5948.652 - 5973.858: 9.8026% ( 83) 00:07:36.208 5973.858 - 5999.065: 10.4879% ( 125) 00:07:36.208 5999.065 - 6024.271: 10.9759% ( 89) 00:07:36.208 6024.271 - 6049.477: 11.5461% ( 104) 00:07:36.208 6049.477 - 6074.683: 12.3410% ( 145) 00:07:36.208 6074.683 - 6099.889: 12.8893% ( 100) 00:07:36.208 6099.889 - 6125.095: 13.6952% ( 147) 00:07:36.208 6125.095 - 6150.302: 14.5066% ( 148) 00:07:36.208 6150.302 - 6175.508: 15.5537% ( 191) 00:07:36.208 6175.508 - 6200.714: 16.5406% ( 180) 00:07:36.208 6200.714 - 6225.920: 18.3882% ( 337) 00:07:36.208 6225.920 - 6251.126: 20.2796% ( 345) 00:07:36.208 6251.126 - 6276.332: 21.8202% ( 281) 00:07:36.208 6276.332 - 6301.538: 23.2237% ( 256) 00:07:36.208 6301.538 - 6326.745: 24.4189% ( 218) 00:07:36.208 6326.745 - 6351.951: 25.7237% ( 238) 00:07:36.208 6351.951 - 6377.157: 27.7303% ( 366) 00:07:36.208 6377.157 - 6402.363: 29.2270% ( 273) 00:07:36.208 6402.363 - 6427.569: 30.9868% ( 321) 00:07:36.208 6427.569 - 6452.775: 32.0724% ( 198) 
00:07:36.208 6452.775 - 6503.188: 35.1754% ( 566) 00:07:36.208 6503.188 - 6553.600: 37.6535% ( 452) 00:07:36.208 6553.600 - 6604.012: 40.0768% ( 442) 00:07:36.208 6604.012 - 6654.425: 42.5548% ( 452) 00:07:36.208 6654.425 - 6704.837: 45.7730% ( 587) 00:07:36.208 6704.837 - 6755.249: 50.1316% ( 795) 00:07:36.208 6755.249 - 6805.662: 53.8487% ( 678) 00:07:36.208 6805.662 - 6856.074: 57.9989% ( 757) 00:07:36.208 6856.074 - 6906.486: 62.7248% ( 862) 00:07:36.208 6906.486 - 6956.898: 66.3268% ( 657) 00:07:36.208 6956.898 - 7007.311: 69.7423% ( 623) 00:07:36.208 7007.311 - 7057.723: 72.5548% ( 513) 00:07:36.208 7057.723 - 7108.135: 75.5647% ( 549) 00:07:36.208 7108.135 - 7158.548: 77.8070% ( 409) 00:07:36.208 7158.548 - 7208.960: 79.6875% ( 343) 00:07:36.208 7208.960 - 7259.372: 80.8224% ( 207) 00:07:36.208 7259.372 - 7309.785: 82.0230% ( 219) 00:07:36.208 7309.785 - 7360.197: 83.0044% ( 179) 00:07:36.208 7360.197 - 7410.609: 83.9583% ( 174) 00:07:36.208 7410.609 - 7461.022: 84.8629% ( 165) 00:07:36.208 7461.022 - 7511.434: 85.7292% ( 158) 00:07:36.208 7511.434 - 7561.846: 86.2664% ( 98) 00:07:36.208 7561.846 - 7612.258: 86.7050% ( 80) 00:07:36.208 7612.258 - 7662.671: 87.2533% ( 100) 00:07:36.208 7662.671 - 7713.083: 87.9715% ( 131) 00:07:36.208 7713.083 - 7763.495: 88.4759% ( 92) 00:07:36.208 7763.495 - 7813.908: 89.2105% ( 134) 00:07:36.208 7813.908 - 7864.320: 89.5669% ( 65) 00:07:36.208 7864.320 - 7914.732: 89.9890% ( 77) 00:07:36.208 7914.732 - 7965.145: 90.5044% ( 94) 00:07:36.208 7965.145 - 8015.557: 90.9265% ( 77) 00:07:36.208 8015.557 - 8065.969: 91.2939% ( 67) 00:07:36.208 8065.969 - 8116.382: 91.5680% ( 50) 00:07:36.208 8116.382 - 8166.794: 91.8969% ( 60) 00:07:36.208 8166.794 - 8217.206: 92.3739% ( 87) 00:07:36.208 8217.206 - 8267.618: 92.5987% ( 41) 00:07:36.208 8267.618 - 8318.031: 92.8015% ( 37) 00:07:36.208 8318.031 - 8368.443: 93.0482% ( 45) 00:07:36.208 8368.443 - 8418.855: 93.3114% ( 48) 00:07:36.208 8418.855 - 8469.268: 93.6184% ( 56) 00:07:36.208 8469.268 - 8519.680: 93.8377% ( 40) 00:07:36.208 8519.680 - 8570.092: 93.9857% ( 27) 00:07:36.208 8570.092 - 8620.505: 94.1667% ( 33) 00:07:36.208 8620.505 - 8670.917: 94.3695% ( 37) 00:07:36.208 8670.917 - 8721.329: 94.6107% ( 44) 00:07:36.208 8721.329 - 8771.742: 94.8300% ( 40) 00:07:36.208 8771.742 - 8822.154: 94.9726% ( 26) 00:07:36.208 8822.154 - 8872.566: 95.3893% ( 76) 00:07:36.208 8872.566 - 8922.978: 95.6963% ( 56) 00:07:36.208 8922.978 - 8973.391: 95.8114% ( 21) 00:07:36.208 8973.391 - 9023.803: 95.8827% ( 13) 00:07:36.208 9023.803 - 9074.215: 95.9539% ( 13) 00:07:36.208 9074.215 - 9124.628: 96.0362% ( 15) 00:07:36.208 9124.628 - 9175.040: 96.1020% ( 12) 00:07:36.208 9175.040 - 9225.452: 96.1458% ( 8) 00:07:36.208 9225.452 - 9275.865: 96.1897% ( 8) 00:07:36.208 9275.865 - 9326.277: 96.2281% ( 7) 00:07:36.208 9326.277 - 9376.689: 96.2719% ( 8) 00:07:36.208 9376.689 - 9427.102: 96.3103% ( 7) 00:07:36.208 9427.102 - 9477.514: 96.3706% ( 11) 00:07:36.208 9477.514 - 9527.926: 96.4638% ( 17) 00:07:36.208 9527.926 - 9578.338: 96.5461% ( 15) 00:07:36.208 9578.338 - 9628.751: 96.6173% ( 13) 00:07:36.208 9628.751 - 9679.163: 96.7928% ( 32) 00:07:36.208 9679.163 - 9729.575: 96.8805% ( 16) 00:07:36.208 9729.575 - 9779.988: 96.9572% ( 14) 00:07:36.208 9779.988 - 9830.400: 97.0614% ( 19) 00:07:36.208 9830.400 - 9880.812: 97.1491% ( 16) 00:07:36.208 9880.812 - 9931.225: 97.2204% ( 13) 00:07:36.208 9931.225 - 9981.637: 97.2862% ( 12) 00:07:36.208 9981.637 - 10032.049: 97.3136% ( 5) 00:07:36.208 10032.049 - 10082.462: 97.3410% ( 5) 
00:07:36.208 10082.462 - 10132.874: 97.3794% ( 7) 00:07:36.208 10132.874 - 10183.286: 97.4232% ( 8) 00:07:36.208 10183.286 - 10233.698: 97.4616% ( 7) 00:07:36.208 10233.698 - 10284.111: 97.5164% ( 10) 00:07:36.208 10284.111 - 10334.523: 97.5603% ( 8) 00:07:36.208 10334.523 - 10384.935: 97.5987% ( 7) 00:07:36.208 10384.935 - 10435.348: 97.6535% ( 10) 00:07:36.208 10435.348 - 10485.760: 97.6864% ( 6) 00:07:36.208 10485.760 - 10536.172: 97.7138% ( 5) 00:07:36.208 10536.172 - 10586.585: 97.7522% ( 7) 00:07:36.208 10586.585 - 10636.997: 97.7906% ( 7) 00:07:36.208 10636.997 - 10687.409: 97.8509% ( 11) 00:07:36.208 10687.409 - 10737.822: 97.9057% ( 10) 00:07:36.208 10737.822 - 10788.234: 97.9605% ( 10) 00:07:36.208 10788.234 - 10838.646: 98.1743% ( 39) 00:07:36.208 10838.646 - 10889.058: 98.2237% ( 9) 00:07:36.209 10889.058 - 10939.471: 98.2840% ( 11) 00:07:36.209 10939.471 - 10989.883: 98.3224% ( 7) 00:07:36.209 10989.883 - 11040.295: 98.3717% ( 9) 00:07:36.209 11040.295 - 11090.708: 98.4046% ( 6) 00:07:36.209 11090.708 - 11141.120: 98.4211% ( 3) 00:07:36.209 11141.120 - 11191.532: 98.4320% ( 2) 00:07:36.209 11191.532 - 11241.945: 98.4430% ( 2) 00:07:36.209 11241.945 - 11292.357: 98.4704% ( 5) 00:07:36.209 11292.357 - 11342.769: 98.5033% ( 6) 00:07:36.209 11342.769 - 11393.182: 98.5307% ( 5) 00:07:36.209 11393.182 - 11443.594: 98.5526% ( 4) 00:07:36.209 11443.594 - 11494.006: 98.5855% ( 6) 00:07:36.209 11494.006 - 11544.418: 98.5910% ( 1) 00:07:36.209 11544.418 - 11594.831: 98.5965% ( 1) 00:07:36.209 11947.717 - 11998.129: 98.6075% ( 2) 00:07:36.209 11998.129 - 12048.542: 98.6239% ( 3) 00:07:36.209 12048.542 - 12098.954: 98.6458% ( 4) 00:07:36.209 12098.954 - 12149.366: 98.6678% ( 4) 00:07:36.209 12149.366 - 12199.778: 98.6897% ( 4) 00:07:36.209 12199.778 - 12250.191: 98.7445% ( 10) 00:07:36.209 12250.191 - 12300.603: 98.7829% ( 7) 00:07:36.209 12300.603 - 12351.015: 98.8268% ( 8) 00:07:36.209 12351.015 - 12401.428: 98.8706% ( 8) 00:07:36.209 12401.428 - 12451.840: 98.9309% ( 11) 00:07:36.209 12451.840 - 12502.252: 99.0241% ( 17) 00:07:36.209 12502.252 - 12552.665: 99.0570% ( 6) 00:07:36.209 12552.665 - 12603.077: 99.0844% ( 5) 00:07:36.209 12603.077 - 12653.489: 99.1173% ( 6) 00:07:36.209 12653.489 - 12703.902: 99.1502% ( 6) 00:07:36.209 12703.902 - 12754.314: 99.1831% ( 6) 00:07:36.209 12754.314 - 12804.726: 99.2050% ( 4) 00:07:36.209 12804.726 - 12855.138: 99.2160% ( 2) 00:07:36.209 12855.138 - 12905.551: 99.2270% ( 2) 00:07:36.209 12905.551 - 13006.375: 99.2434% ( 3) 00:07:36.209 13006.375 - 13107.200: 99.2599% ( 3) 00:07:36.209 13107.200 - 13208.025: 99.2708% ( 2) 00:07:36.209 13208.025 - 13308.849: 99.2928% ( 4) 00:07:36.209 13308.849 - 13409.674: 99.2982% ( 1) 00:07:36.209 19156.677 - 19257.502: 99.3092% ( 2) 00:07:36.209 19257.502 - 19358.326: 99.3311% ( 4) 00:07:36.209 19358.326 - 19459.151: 99.3531% ( 4) 00:07:36.209 19459.151 - 19559.975: 99.3805% ( 5) 00:07:36.209 19559.975 - 19660.800: 99.4024% ( 4) 00:07:36.209 19660.800 - 19761.625: 99.4243% ( 4) 00:07:36.209 19761.625 - 19862.449: 99.4463% ( 4) 00:07:36.209 19862.449 - 19963.274: 99.4682% ( 4) 00:07:36.209 19963.274 - 20064.098: 99.4846% ( 3) 00:07:36.209 20064.098 - 20164.923: 99.5066% ( 4) 00:07:36.209 20164.923 - 20265.748: 99.5285% ( 4) 00:07:36.209 20265.748 - 20366.572: 99.5450% ( 3) 00:07:36.209 20366.572 - 20467.397: 99.5669% ( 4) 00:07:36.209 20467.397 - 20568.222: 99.5943% ( 5) 00:07:36.209 20568.222 - 20669.046: 99.6162% ( 4) 00:07:36.209 20669.046 - 20769.871: 99.6382% ( 4) 00:07:36.209 20769.871 - 20870.695: 99.6491% 
( 2) 00:07:36.209 23592.960 - 23693.785: 99.6601% ( 2) 00:07:36.209 23693.785 - 23794.609: 99.6820% ( 4) 00:07:36.209 23794.609 - 23895.434: 99.7039% ( 4) 00:07:36.209 23895.434 - 23996.258: 99.7259% ( 4) 00:07:36.209 23996.258 - 24097.083: 99.7423% ( 3) 00:07:36.209 24097.083 - 24197.908: 99.7643% ( 4) 00:07:36.209 24197.908 - 24298.732: 99.7862% ( 4) 00:07:36.209 24298.732 - 24399.557: 99.8026% ( 3) 00:07:36.209 24399.557 - 24500.382: 99.8300% ( 5) 00:07:36.209 24500.382 - 24601.206: 99.8520% ( 4) 00:07:36.209 24601.206 - 24702.031: 99.8739% ( 4) 00:07:36.209 24702.031 - 24802.855: 99.8958% ( 4) 00:07:36.209 24802.855 - 24903.680: 99.9232% ( 5) 00:07:36.209 24903.680 - 25004.505: 99.9452% ( 4) 00:07:36.209 25004.505 - 25105.329: 99.9671% ( 4) 00:07:36.209 25105.329 - 25206.154: 99.9890% ( 4) 00:07:36.209 25206.154 - 25306.978: 100.0000% ( 2) 00:07:36.209 00:07:36.209 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:36.209 ============================================================================== 00:07:36.209 Range in us Cumulative IO count 00:07:36.209 5116.849 - 5142.055: 0.0055% ( 1) 00:07:36.209 5192.468 - 5217.674: 0.0109% ( 1) 00:07:36.209 5242.880 - 5268.086: 0.0328% ( 4) 00:07:36.209 5268.086 - 5293.292: 0.0492% ( 3) 00:07:36.209 5293.292 - 5318.498: 0.0983% ( 9) 00:07:36.209 5318.498 - 5343.705: 0.1366% ( 7) 00:07:36.209 5343.705 - 5368.911: 0.1858% ( 9) 00:07:36.209 5368.911 - 5394.117: 0.2513% ( 12) 00:07:36.209 5394.117 - 5419.323: 0.3387% ( 16) 00:07:36.209 5419.323 - 5444.529: 0.5791% ( 44) 00:07:36.209 5444.529 - 5469.735: 0.8031% ( 41) 00:07:36.209 5469.735 - 5494.942: 1.0708% ( 49) 00:07:36.209 5494.942 - 5520.148: 1.2511% ( 33) 00:07:36.209 5520.148 - 5545.354: 1.5406% ( 53) 00:07:36.209 5545.354 - 5570.560: 1.7974% ( 47) 00:07:36.209 5570.560 - 5595.766: 2.3875% ( 108) 00:07:36.209 5595.766 - 5620.972: 2.8409% ( 83) 00:07:36.209 5620.972 - 5646.178: 3.3490% ( 93) 00:07:36.209 5646.178 - 5671.385: 3.9062% ( 102) 00:07:36.209 5671.385 - 5696.591: 4.7367% ( 152) 00:07:36.209 5696.591 - 5721.797: 5.4851% ( 137) 00:07:36.209 5721.797 - 5747.003: 5.9604% ( 87) 00:07:36.209 5747.003 - 5772.209: 6.2937% ( 61) 00:07:36.209 5772.209 - 5797.415: 6.7799% ( 89) 00:07:36.209 5797.415 - 5822.622: 7.2498% ( 86) 00:07:36.209 5822.622 - 5847.828: 7.6541% ( 74) 00:07:36.209 5847.828 - 5873.034: 8.2605% ( 111) 00:07:36.209 5873.034 - 5898.240: 8.6812% ( 77) 00:07:36.209 5898.240 - 5923.446: 8.9106% ( 42) 00:07:36.209 5923.446 - 5948.652: 9.2275% ( 58) 00:07:36.209 5948.652 - 5973.858: 9.6700% ( 81) 00:07:36.209 5973.858 - 5999.065: 10.3147% ( 118) 00:07:36.209 5999.065 - 6024.271: 10.8665% ( 101) 00:07:36.209 6024.271 - 6049.477: 11.3527% ( 89) 00:07:36.209 6049.477 - 6074.683: 12.0192% ( 122) 00:07:36.209 6074.683 - 6099.889: 12.9425% ( 169) 00:07:36.209 6099.889 - 6125.095: 13.6309% ( 126) 00:07:36.209 6125.095 - 6150.302: 14.7837% ( 211) 00:07:36.209 6150.302 - 6175.508: 15.8381% ( 193) 00:07:36.209 6175.508 - 6200.714: 17.1984% ( 249) 00:07:36.209 6200.714 - 6225.920: 18.7445% ( 283) 00:07:36.209 6225.920 - 6251.126: 20.7113% ( 360) 00:07:36.209 6251.126 - 6276.332: 22.1700% ( 267) 00:07:36.209 6276.332 - 6301.538: 23.2408% ( 196) 00:07:36.209 6301.538 - 6326.745: 24.6230% ( 253) 00:07:36.209 6326.745 - 6351.951: 25.7867% ( 213) 00:07:36.209 6351.951 - 6377.157: 27.5076% ( 315) 00:07:36.209 6377.157 - 6402.363: 28.9500% ( 264) 00:07:36.209 6402.363 - 6427.569: 30.6873% ( 318) 00:07:36.209 6427.569 - 6452.775: 32.2771% ( 291) 00:07:36.209 6452.775 - 6503.188: 
34.5771% ( 421) 00:07:36.209 6503.188 - 6553.600: 36.9591% ( 436) 00:07:36.209 6553.600 - 6604.012: 39.4285% ( 452) 00:07:36.209 6604.012 - 6654.425: 42.3678% ( 538) 00:07:36.209 6654.425 - 6704.837: 45.6949% ( 609) 00:07:36.209 6704.837 - 6755.249: 49.7651% ( 745) 00:07:36.209 6755.249 - 6805.662: 53.8516% ( 748) 00:07:36.209 6805.662 - 6856.074: 58.4572% ( 843) 00:07:36.209 6856.074 - 6906.486: 62.7076% ( 778) 00:07:36.209 6906.486 - 6956.898: 66.5811% ( 709) 00:07:36.209 6956.898 - 7007.311: 70.0339% ( 632) 00:07:36.209 7007.311 - 7057.723: 73.3610% ( 609) 00:07:36.209 7057.723 - 7108.135: 75.6337% ( 416) 00:07:36.209 7108.135 - 7158.548: 77.9556% ( 425) 00:07:36.209 7158.548 - 7208.960: 79.5892% ( 299) 00:07:36.209 7208.960 - 7259.372: 81.3265% ( 318) 00:07:36.209 7259.372 - 7309.785: 83.2332% ( 349) 00:07:36.209 7309.785 - 7360.197: 84.1346% ( 165) 00:07:36.209 7360.197 - 7410.609: 84.8066% ( 123) 00:07:36.209 7410.609 - 7461.022: 85.8719% ( 195) 00:07:36.209 7461.022 - 7511.434: 86.4019% ( 97) 00:07:36.209 7511.434 - 7561.846: 86.8389% ( 80) 00:07:36.209 7561.846 - 7612.258: 87.1230% ( 52) 00:07:36.209 7612.258 - 7662.671: 87.4344% ( 57) 00:07:36.209 7662.671 - 7713.083: 87.8497% ( 76) 00:07:36.209 7713.083 - 7763.495: 88.3632% ( 94) 00:07:36.209 7763.495 - 7813.908: 88.8112% ( 82) 00:07:36.209 7813.908 - 7864.320: 89.1772% ( 67) 00:07:36.209 7864.320 - 7914.732: 89.5760% ( 73) 00:07:36.209 7914.732 - 7965.145: 89.9803% ( 74) 00:07:36.209 7965.145 - 8015.557: 90.3191% ( 62) 00:07:36.209 8015.557 - 8065.969: 90.5431% ( 41) 00:07:36.209 8065.969 - 8116.382: 90.7288% ( 34) 00:07:36.209 8116.382 - 8166.794: 91.2205% ( 90) 00:07:36.209 8166.794 - 8217.206: 91.8324% ( 112) 00:07:36.209 8217.206 - 8267.618: 92.3405% ( 93) 00:07:36.209 8267.618 - 8318.031: 92.5809% ( 44) 00:07:36.209 8318.031 - 8368.443: 92.7666% ( 34) 00:07:36.209 8368.443 - 8418.855: 92.9851% ( 40) 00:07:36.209 8418.855 - 8469.268: 93.1982% ( 39) 00:07:36.209 8469.268 - 8519.680: 93.4604% ( 48) 00:07:36.209 8519.680 - 8570.092: 93.6571% ( 36) 00:07:36.209 8570.092 - 8620.505: 93.9139% ( 47) 00:07:36.209 8620.505 - 8670.917: 94.1488% ( 43) 00:07:36.209 8670.917 - 8721.329: 94.4165% ( 49) 00:07:36.209 8721.329 - 8771.742: 94.5859% ( 31) 00:07:36.209 8771.742 - 8822.154: 94.7935% ( 38) 00:07:36.209 8822.154 - 8872.566: 95.1104% ( 58) 00:07:36.209 8872.566 - 8922.978: 95.3781% ( 49) 00:07:36.209 8922.978 - 8973.391: 95.5911% ( 39) 00:07:36.209 8973.391 - 9023.803: 95.7386% ( 27) 00:07:36.209 9023.803 - 9074.215: 95.8861% ( 27) 00:07:36.209 9074.215 - 9124.628: 96.0555% ( 31) 00:07:36.209 9124.628 - 9175.040: 96.2850% ( 42) 00:07:36.209 9175.040 - 9225.452: 96.4871% ( 37) 00:07:36.209 9225.452 - 9275.865: 96.5472% ( 11) 00:07:36.209 9275.865 - 9326.277: 96.5854% ( 7) 00:07:36.209 9326.277 - 9376.689: 96.6401% ( 10) 00:07:36.209 9376.689 - 9427.102: 96.6838% ( 8) 00:07:36.209 9427.102 - 9477.514: 96.7166% ( 6) 00:07:36.209 9477.514 - 9527.926: 96.7493% ( 6) 00:07:36.209 9527.926 - 9578.338: 96.7767% ( 5) 00:07:36.209 9578.338 - 9628.751: 96.8040% ( 5) 00:07:36.209 9628.751 - 9679.163: 96.8313% ( 5) 00:07:36.209 9679.163 - 9729.575: 96.8531% ( 4) 00:07:36.209 9779.988 - 9830.400: 96.8805% ( 5) 00:07:36.209 9830.400 - 9880.812: 96.9515% ( 13) 00:07:36.209 9880.812 - 9931.225: 97.0170% ( 12) 00:07:36.209 9931.225 - 9981.637: 97.0553% ( 7) 00:07:36.209 9981.637 - 10032.049: 97.0935% ( 7) 00:07:36.209 10032.049 - 10082.462: 97.1208% ( 5) 00:07:36.209 10082.462 - 10132.874: 97.1591% ( 7) 00:07:36.209 10132.874 - 10183.286: 97.1973% ( 
7) 00:07:36.209 10183.286 - 10233.698: 97.2629% ( 12) 00:07:36.210 10233.698 - 10284.111: 97.3339% ( 13) 00:07:36.210 10284.111 - 10334.523: 97.4049% ( 13) 00:07:36.210 10334.523 - 10384.935: 97.6399% ( 43) 00:07:36.210 10384.935 - 10435.348: 97.7000% ( 11) 00:07:36.210 10435.348 - 10485.760: 97.7655% ( 12) 00:07:36.210 10485.760 - 10536.172: 97.8201% ( 10) 00:07:36.210 10536.172 - 10586.585: 97.8693% ( 9) 00:07:36.210 10586.585 - 10636.997: 97.8857% ( 3) 00:07:36.210 10636.997 - 10687.409: 97.9021% ( 3) 00:07:36.210 11090.708 - 11141.120: 97.9130% ( 2) 00:07:36.210 11141.120 - 11191.532: 97.9240% ( 2) 00:07:36.210 11191.532 - 11241.945: 97.9349% ( 2) 00:07:36.210 11241.945 - 11292.357: 97.9458% ( 2) 00:07:36.210 11292.357 - 11342.769: 97.9622% ( 3) 00:07:36.210 11342.769 - 11393.182: 98.0059% ( 8) 00:07:36.210 11393.182 - 11443.594: 98.0496% ( 8) 00:07:36.210 11443.594 - 11494.006: 98.0988% ( 9) 00:07:36.210 11494.006 - 11544.418: 98.1534% ( 10) 00:07:36.210 11544.418 - 11594.831: 98.2190% ( 12) 00:07:36.210 11594.831 - 11645.243: 98.3009% ( 15) 00:07:36.210 11645.243 - 11695.655: 98.4757% ( 32) 00:07:36.210 11695.655 - 11746.068: 98.5850% ( 20) 00:07:36.210 11746.068 - 11796.480: 98.6670% ( 15) 00:07:36.210 11796.480 - 11846.892: 98.8199% ( 28) 00:07:36.210 11846.892 - 11897.305: 98.9019% ( 15) 00:07:36.210 11897.305 - 11947.717: 98.9620% ( 11) 00:07:36.210 11947.717 - 11998.129: 99.0166% ( 10) 00:07:36.210 11998.129 - 12048.542: 99.0494% ( 6) 00:07:36.210 12048.542 - 12098.954: 99.0822% ( 6) 00:07:36.210 12098.954 - 12149.366: 99.1040% ( 4) 00:07:36.210 12149.366 - 12199.778: 99.1368% ( 6) 00:07:36.210 12199.778 - 12250.191: 99.1641% ( 5) 00:07:36.210 12250.191 - 12300.603: 99.1969% ( 6) 00:07:36.210 12300.603 - 12351.015: 99.2242% ( 5) 00:07:36.210 12351.015 - 12401.428: 99.2570% ( 6) 00:07:36.210 12401.428 - 12451.840: 99.2898% ( 6) 00:07:36.210 12451.840 - 12502.252: 99.3007% ( 2) 00:07:36.210 13409.674 - 13510.498: 99.3226% ( 4) 00:07:36.210 13510.498 - 13611.323: 99.3444% ( 4) 00:07:36.210 13611.323 - 13712.148: 99.3663% ( 4) 00:07:36.210 13712.148 - 13812.972: 99.3881% ( 4) 00:07:36.210 13812.972 - 13913.797: 99.4100% ( 4) 00:07:36.210 13913.797 - 14014.622: 99.4373% ( 5) 00:07:36.210 14014.622 - 14115.446: 99.4591% ( 4) 00:07:36.210 14115.446 - 14216.271: 99.4810% ( 4) 00:07:36.210 14216.271 - 14317.095: 99.5028% ( 4) 00:07:36.210 14317.095 - 14417.920: 99.5302% ( 5) 00:07:36.210 14417.920 - 14518.745: 99.5520% ( 4) 00:07:36.210 14518.745 - 14619.569: 99.5739% ( 4) 00:07:36.210 14619.569 - 14720.394: 99.5957% ( 4) 00:07:36.210 14720.394 - 14821.218: 99.6176% ( 4) 00:07:36.210 14821.218 - 14922.043: 99.6449% ( 5) 00:07:36.210 14922.043 - 15022.868: 99.6503% ( 1) 00:07:36.210 18652.554 - 18753.378: 99.6722% ( 4) 00:07:36.210 18753.378 - 18854.203: 99.6941% ( 4) 00:07:36.210 18854.203 - 18955.028: 99.7159% ( 4) 00:07:36.210 18955.028 - 19055.852: 99.7378% ( 4) 00:07:36.210 19055.852 - 19156.677: 99.7596% ( 4) 00:07:36.210 19156.677 - 19257.502: 99.7815% ( 4) 00:07:36.210 19257.502 - 19358.326: 99.8033% ( 4) 00:07:36.210 19358.326 - 19459.151: 99.8252% ( 4) 00:07:36.210 19459.151 - 19559.975: 99.8470% ( 4) 00:07:36.210 19559.975 - 19660.800: 99.8689% ( 4) 00:07:36.210 19660.800 - 19761.625: 99.8907% ( 4) 00:07:36.210 19761.625 - 19862.449: 99.9071% ( 3) 00:07:36.210 19862.449 - 19963.274: 99.9290% ( 4) 00:07:36.210 19963.274 - 20064.098: 99.9563% ( 5) 00:07:36.210 20064.098 - 20164.923: 99.9781% ( 4) 00:07:36.210 20164.923 - 20265.748: 99.9945% ( 3) 00:07:36.210 20265.748 - 
00:07:36.210 
00:07:36.210 ************************************
00:07:36.210 END TEST nvme_perf
00:07:36.210 ************************************
00:07:36.210 09:13:27 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:36.210 
00:07:36.210 real 0m2.471s
00:07:36.210 user 0m2.187s
00:07:36.210 sys 0m0.183s
00:07:36.210 09:13:27 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.210 09:13:27 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:36.210 09:13:27 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.210 ************************************
00:07:36.210 START TEST nvme_hello_world
00:07:36.210 ************************************
00:07:36.210 09:13:27 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:36.210 Initializing NVMe Controllers
00:07:36.210 Attached to 0000:00:10.0
00:07:36.210 Namespace ID: 1 size: 6GB
00:07:36.210 Attached to 0000:00:11.0
00:07:36.210 Namespace ID: 1 size: 5GB
00:07:36.210 Attached to 0000:00:13.0
00:07:36.210 Namespace ID: 1 size: 1GB
00:07:36.210 Attached to 0000:00:12.0
00:07:36.210 Namespace ID: 1 size: 4GB
00:07:36.210 Namespace ID: 2 size: 4GB
00:07:36.210 Namespace ID: 3 size: 4GB
00:07:36.210 Initialization complete.
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 INFO: using host memory buffer for IO
00:07:36.210 Hello world!
00:07:36.210 
00:07:36.210 real 0m0.208s
00:07:36.210 user 0m0.075s
00:07:36.210 sys 0m0.089s
00:07:36.210 ************************************
00:07:36.210 END TEST nvme_hello_world
00:07:36.210 ************************************
00:07:36.210 09:13:27 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.210 09:13:27 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:36.210 09:13:27 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.210 09:13:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.210 ************************************
00:07:36.210 START TEST nvme_sgl
00:07:36.210 ************************************
00:07:36.210 09:13:27 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:36.469 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:36.469 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:36.469 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:36.469 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:36.469 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:36.469 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:36.469 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:36.469 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:36.469 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:36.469 NVMe Readv/Writev Request test
00:07:36.469 Attached to 0000:00:10.0
00:07:36.469 Attached to 0000:00:11.0
00:07:36.469 Attached to 0000:00:13.0
00:07:36.469 Attached to 0000:00:12.0
00:07:36.469 0000:00:10.0: build_io_request_2 test passed
00:07:36.469 0000:00:10.0: build_io_request_4 test passed
00:07:36.469 0000:00:10.0: build_io_request_5 test passed
00:07:36.469 0000:00:10.0: build_io_request_6 test passed
00:07:36.469 0000:00:10.0: build_io_request_7 test passed
00:07:36.469 0000:00:10.0: build_io_request_10 test passed
00:07:36.469 0000:00:11.0: build_io_request_2 test passed
00:07:36.469 0000:00:11.0: build_io_request_4 test passed
00:07:36.469 0000:00:11.0: build_io_request_5 test passed
00:07:36.469 0000:00:11.0: build_io_request_6 test passed
00:07:36.469 0000:00:11.0: build_io_request_7 test passed
00:07:36.469 0000:00:11.0: build_io_request_10 test passed
00:07:36.469 Cleaning up...
00:07:36.469 
00:07:36.469 real 0m0.274s
00:07:36.469 user 0m0.140s
00:07:36.469 sys 0m0.087s
00:07:36.469 09:13:28 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.469 09:13:28 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:36.469 ************************************
00:07:36.469 END TEST nvme_sgl
00:07:36.469 ************************************
00:07:36.469 09:13:28 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:36.469 09:13:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.469 09:13:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.469 09:13:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.469 ************************************
00:07:36.469 START TEST nvme_e2edp
00:07:36.469 ************************************
00:07:36.469 09:13:28 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:36.727 NVMe Write/Read with End-to-End data protection test
00:07:36.727 Attached to 0000:00:10.0
00:07:36.727 Attached to 0000:00:11.0
00:07:36.727 Attached to 0000:00:13.0
00:07:36.727 Attached to 0000:00:12.0
00:07:36.727 Cleaning up...
00:07:36.727 
00:07:36.727 real 0m0.193s
00:07:36.727 user 0m0.057s
00:07:36.727 sys 0m0.093s
00:07:36.727 09:13:28 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.727 09:13:28 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:36.727 ************************************
00:07:36.727 END TEST nvme_e2edp
00:07:36.727 ************************************
00:07:36.727 09:13:28 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.727 09:13:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.727 09:13:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.727 09:13:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.727 ************************************
00:07:36.727 START TEST nvme_reserve
00:07:36.727 ************************************
00:07:36.727 09:13:28 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:36.985 =====================================================
00:07:36.985 NVMe Controller at PCI bus 0, device 16, function 0
00:07:36.985 =====================================================
00:07:36.985 Reservations: Not Supported
00:07:36.985 =====================================================
00:07:36.985 NVMe Controller at PCI bus 0, device 17, function 0
00:07:36.985 =====================================================
00:07:36.985 Reservations: Not Supported
00:07:36.985 =====================================================
00:07:36.985 NVMe Controller at PCI bus 0, device 19, function 0
00:07:36.985 =====================================================
00:07:36.985 Reservations: Not Supported
00:07:36.985 =====================================================
00:07:36.985 NVMe Controller at PCI bus 0, device 18, function 0
00:07:36.985 =====================================================
00:07:36.985 Reservations: Not Supported
00:07:36.985 Reservation test passed
00:07:36.985 
00:07:36.985 real 0m0.200s
00:07:36.985 user 0m0.059s
00:07:36.985 sys 0m0.099s
00:07:36.985 09:13:28 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.985 09:13:28 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:36.985 ************************************
00:07:36.985 END TEST nvme_reserve
00:07:36.985 ************************************
00:07:36.985 09:13:28 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.985 09:13:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:36.985 09:13:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.985 09:13:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.985 ************************************
00:07:36.985 START TEST nvme_err_injection
00:07:36.985 ************************************
00:07:36.985 09:13:28 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:37.242 NVMe Error Injection test
00:07:37.242 Attached to 0000:00:10.0
00:07:37.242 Attached to 0000:00:11.0
00:07:37.242 Attached to 0000:00:13.0
00:07:37.242 Attached to 0000:00:12.0
00:07:37.242 0000:00:10.0: get features failed as expected
00:07:37.242 0000:00:11.0: get features failed as expected
00:07:37.242 0000:00:13.0: get features failed as expected
00:07:37.242 0000:00:12.0: get features failed as expected
0000:00:10.0: get features successfully as expected 00:07:37.242 0000:00:11.0: get features successfully as expected 00:07:37.242 0000:00:13.0: get features successfully as expected 00:07:37.242 0000:00:12.0: get features successfully as expected 00:07:37.242 0000:00:10.0: read failed as expected 00:07:37.242 0000:00:11.0: read failed as expected 00:07:37.242 0000:00:13.0: read failed as expected 00:07:37.242 0000:00:12.0: read failed as expected 00:07:37.242 0000:00:10.0: read successfully as expected 00:07:37.242 0000:00:11.0: read successfully as expected 00:07:37.242 0000:00:13.0: read successfully as expected 00:07:37.242 0000:00:12.0: read successfully as expected 00:07:37.242 Cleaning up... 00:07:37.242 00:07:37.242 real 0m0.207s 00:07:37.242 user 0m0.069s 00:07:37.242 sys 0m0.097s 00:07:37.242 09:13:28 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.242 ************************************ 00:07:37.242 END TEST nvme_err_injection 00:07:37.242 ************************************ 00:07:37.242 09:13:28 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 09:13:28 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:37.242 09:13:28 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:37.242 09:13:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.242 09:13:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.242 ************************************ 00:07:37.242 START TEST nvme_overhead 00:07:37.242 ************************************ 00:07:37.242 09:13:28 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:38.618 Initializing NVMe Controllers 00:07:38.618 Attached to 0000:00:10.0 00:07:38.618 Attached to 0000:00:11.0 00:07:38.618 Attached to 0000:00:13.0 00:07:38.618 Attached to 0000:00:12.0 00:07:38.618 Initialization complete. Launching workers. 
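For the nvme_overhead run launched above, the flags appear to select a 4096-byte I/O size (-o 4096), a 1-second run (-t 1), histogram output (-H), and shared-memory instance 0 (-i 0). Each histogram row that follows reads "low - high: cumulative% (count in bucket)": the percentage accumulates across buckets while the parenthesized number is that bucket's own count. That makes the total derivable from any adjacent row pair; between the 24.3856% and 45.6800% rows below, the cumulative share grows by 21.2944 points on a bucket of 3830 submissions:

    # infer the total tracked submissions from one bucket (values from the
    # submit histogram below)
    echo 'scale=1; 3830 * 100 / (45.6800 - 24.3856)' | bc   # ~17985.9, i.e. ~18k I/Os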
00:07:38.618 submit (in ns) avg, min, max = 11362.0, 9866.9, 994130.0 00:07:38.618 complete (in ns) avg, min, max = 7575.3, 7213.8, 240776.9 00:07:38.618 00:07:38.618 Submit histogram 00:07:38.618 ================ 00:07:38.618 Range in us Cumulative Count 00:07:38.618 9.846 - 9.895: 0.0056% ( 1) 00:07:38.618 9.945 - 9.994: 0.0111% ( 1) 00:07:38.618 10.289 - 10.338: 0.0167% ( 1) 00:07:38.618 10.732 - 10.782: 0.0278% ( 2) 00:07:38.618 10.782 - 10.831: 0.0389% ( 2) 00:07:38.618 10.831 - 10.880: 0.3169% ( 50) 00:07:38.618 10.880 - 10.929: 1.9293% ( 290) 00:07:38.618 10.929 - 10.978: 8.6512% ( 1209) 00:07:38.618 10.978 - 11.028: 24.3856% ( 2830) 00:07:38.618 11.028 - 11.077: 45.6800% ( 3830) 00:07:38.618 11.077 - 11.126: 63.7329% ( 3247) 00:07:38.618 11.126 - 11.175: 74.9694% ( 2021) 00:07:38.618 11.175 - 11.225: 81.5801% ( 1189) 00:07:38.618 11.225 - 11.274: 85.0940% ( 632) 00:07:38.618 11.274 - 11.323: 87.0955% ( 360) 00:07:38.618 11.323 - 11.372: 88.3910% ( 233) 00:07:38.618 11.372 - 11.422: 89.2806% ( 160) 00:07:38.618 11.422 - 11.471: 90.0589% ( 140) 00:07:38.618 11.471 - 11.520: 90.6650% ( 109) 00:07:38.618 11.520 - 11.569: 91.2710% ( 109) 00:07:38.618 11.569 - 11.618: 91.8214% ( 99) 00:07:38.618 11.618 - 11.668: 92.2773% ( 82) 00:07:38.618 11.668 - 11.717: 92.7332% ( 82) 00:07:38.618 11.717 - 11.766: 93.2225% ( 88) 00:07:38.618 11.766 - 11.815: 93.6284% ( 73) 00:07:38.618 11.815 - 11.865: 93.9953% ( 66) 00:07:38.618 11.865 - 11.914: 94.3289% ( 60) 00:07:38.618 11.914 - 11.963: 94.6069% ( 50) 00:07:38.618 11.963 - 12.012: 94.8293% ( 40) 00:07:38.618 12.012 - 12.062: 95.0183% ( 34) 00:07:38.618 12.062 - 12.111: 95.2908% ( 49) 00:07:38.618 12.111 - 12.160: 95.5855% ( 53) 00:07:38.618 12.160 - 12.209: 95.8634% ( 50) 00:07:38.618 12.209 - 12.258: 96.0580% ( 35) 00:07:38.618 12.258 - 12.308: 96.2471% ( 34) 00:07:38.618 12.308 - 12.357: 96.4472% ( 36) 00:07:38.618 12.357 - 12.406: 96.5696% ( 22) 00:07:38.618 12.406 - 12.455: 96.6752% ( 19) 00:07:38.618 12.455 - 12.505: 96.7808% ( 19) 00:07:38.618 12.505 - 12.554: 96.8475% ( 12) 00:07:38.618 12.554 - 12.603: 96.9143% ( 12) 00:07:38.618 12.603 - 12.702: 96.9587% ( 8) 00:07:38.618 12.702 - 12.800: 96.9699% ( 2) 00:07:38.618 12.800 - 12.898: 96.9977% ( 5) 00:07:38.618 12.898 - 12.997: 97.0310% ( 6) 00:07:38.618 12.997 - 13.095: 97.1033% ( 13) 00:07:38.618 13.095 - 13.194: 97.2256% ( 22) 00:07:38.618 13.194 - 13.292: 97.3535% ( 23) 00:07:38.618 13.292 - 13.391: 97.4703% ( 21) 00:07:38.618 13.391 - 13.489: 97.5759% ( 19) 00:07:38.618 13.489 - 13.588: 97.6704% ( 17) 00:07:38.618 13.588 - 13.686: 97.7204% ( 9) 00:07:38.618 13.686 - 13.785: 97.7816% ( 11) 00:07:38.618 13.785 - 13.883: 97.8372% ( 10) 00:07:38.618 13.883 - 13.982: 97.8539% ( 3) 00:07:38.618 13.982 - 14.080: 97.8706% ( 3) 00:07:38.618 14.080 - 14.178: 97.8984% ( 5) 00:07:38.618 14.178 - 14.277: 97.9095% ( 2) 00:07:38.618 14.277 - 14.375: 97.9373% ( 5) 00:07:38.618 14.375 - 14.474: 97.9428% ( 1) 00:07:38.618 14.474 - 14.572: 97.9540% ( 2) 00:07:38.618 14.572 - 14.671: 97.9818% ( 5) 00:07:38.618 14.769 - 14.868: 98.0040% ( 4) 00:07:38.618 14.868 - 14.966: 98.0096% ( 1) 00:07:38.618 14.966 - 15.065: 98.0318% ( 4) 00:07:38.618 15.065 - 15.163: 98.0540% ( 4) 00:07:38.618 15.163 - 15.262: 98.0707% ( 3) 00:07:38.618 15.262 - 15.360: 98.0930% ( 4) 00:07:38.618 15.360 - 15.458: 98.1208% ( 5) 00:07:38.618 15.458 - 15.557: 98.1541% ( 6) 00:07:38.618 15.557 - 15.655: 98.1652% ( 2) 00:07:38.618 15.655 - 15.754: 98.1875% ( 4) 00:07:38.618 15.754 - 15.852: 98.2097% ( 4) 00:07:38.618 15.852 - 15.951: 
98.2486% ( 7) 00:07:38.618 15.951 - 16.049: 98.2709% ( 4) 00:07:38.618 16.049 - 16.148: 98.2820% ( 2) 00:07:38.618 16.148 - 16.246: 98.2876% ( 1) 00:07:38.618 16.246 - 16.345: 98.2987% ( 2) 00:07:38.618 16.345 - 16.443: 98.3209% ( 4) 00:07:38.618 16.443 - 16.542: 98.3376% ( 3) 00:07:38.618 16.542 - 16.640: 98.4210% ( 15) 00:07:38.618 16.640 - 16.738: 98.5822% ( 29) 00:07:38.618 16.738 - 16.837: 98.7212% ( 25) 00:07:38.618 16.837 - 16.935: 98.8880% ( 30) 00:07:38.618 16.935 - 17.034: 98.9881% ( 18) 00:07:38.618 17.034 - 17.132: 99.0493% ( 11) 00:07:38.618 17.132 - 17.231: 99.1327% ( 15) 00:07:38.618 17.231 - 17.329: 99.2216% ( 16) 00:07:38.618 17.329 - 17.428: 99.2939% ( 13) 00:07:38.618 17.428 - 17.526: 99.3384% ( 8) 00:07:38.618 17.526 - 17.625: 99.4162% ( 14) 00:07:38.618 17.625 - 17.723: 99.4718% ( 10) 00:07:38.618 17.723 - 17.822: 99.5052% ( 6) 00:07:38.618 17.822 - 17.920: 99.5274% ( 4) 00:07:38.618 17.920 - 18.018: 99.5608% ( 6) 00:07:38.618 18.018 - 18.117: 99.6108% ( 9) 00:07:38.618 18.117 - 18.215: 99.6386% ( 5) 00:07:38.618 18.215 - 18.314: 99.6442% ( 1) 00:07:38.618 18.314 - 18.412: 99.6553% ( 2) 00:07:38.618 18.412 - 18.511: 99.6608% ( 1) 00:07:38.618 18.511 - 18.609: 99.6775% ( 3) 00:07:38.618 18.609 - 18.708: 99.6942% ( 3) 00:07:38.618 18.708 - 18.806: 99.6998% ( 1) 00:07:38.618 18.806 - 18.905: 99.7109% ( 2) 00:07:38.618 18.905 - 19.003: 99.7220% ( 2) 00:07:38.618 19.102 - 19.200: 99.7276% ( 1) 00:07:38.618 19.298 - 19.397: 99.7387% ( 2) 00:07:38.618 19.397 - 19.495: 99.7442% ( 1) 00:07:38.618 19.495 - 19.594: 99.7498% ( 1) 00:07:38.618 19.594 - 19.692: 99.7609% ( 2) 00:07:38.618 19.791 - 19.889: 99.7720% ( 2) 00:07:38.618 19.988 - 20.086: 99.7832% ( 2) 00:07:38.618 20.086 - 20.185: 99.7943% ( 2) 00:07:38.618 20.185 - 20.283: 99.8054% ( 2) 00:07:38.618 20.480 - 20.578: 99.8110% ( 1) 00:07:38.618 21.071 - 21.169: 99.8221% ( 2) 00:07:38.618 21.465 - 21.563: 99.8276% ( 1) 00:07:38.618 21.858 - 21.957: 99.8332% ( 1) 00:07:38.618 21.957 - 22.055: 99.8443% ( 2) 00:07:38.618 22.055 - 22.154: 99.8499% ( 1) 00:07:38.618 22.449 - 22.548: 99.8610% ( 2) 00:07:38.618 22.745 - 22.843: 99.8666% ( 1) 00:07:38.618 23.040 - 23.138: 99.8721% ( 1) 00:07:38.618 23.138 - 23.237: 99.8777% ( 1) 00:07:38.618 23.237 - 23.335: 99.8832% ( 1) 00:07:38.618 23.434 - 23.532: 99.8888% ( 1) 00:07:38.618 23.828 - 23.926: 99.8944% ( 1) 00:07:38.618 24.025 - 24.123: 99.8999% ( 1) 00:07:38.618 25.206 - 25.403: 99.9055% ( 1) 00:07:38.618 25.994 - 26.191: 99.9110% ( 1) 00:07:38.618 26.191 - 26.388: 99.9166% ( 1) 00:07:38.618 26.782 - 26.978: 99.9222% ( 1) 00:07:38.618 27.372 - 27.569: 99.9277% ( 1) 00:07:38.618 27.766 - 27.963: 99.9333% ( 1) 00:07:38.618 27.963 - 28.160: 99.9388% ( 1) 00:07:38.618 28.751 - 28.948: 99.9444% ( 1) 00:07:38.618 29.932 - 30.129: 99.9500% ( 1) 00:07:38.618 30.129 - 30.326: 99.9555% ( 1) 00:07:38.618 30.720 - 30.917: 99.9611% ( 1) 00:07:38.618 31.508 - 31.705: 99.9666% ( 1) 00:07:38.618 34.265 - 34.462: 99.9722% ( 1) 00:07:38.618 37.415 - 37.612: 99.9778% ( 1) 00:07:38.618 41.157 - 41.354: 99.9833% ( 1) 00:07:38.618 41.748 - 41.945: 99.9889% ( 1) 00:07:38.618 54.351 - 54.745: 99.9944% ( 1) 00:07:38.618 989.342 - 995.643: 100.0000% ( 1) 00:07:38.618 00:07:38.618 Complete histogram 00:07:38.618 ================== 00:07:38.618 Range in us Cumulative Count 00:07:38.618 7.188 - 7.237: 0.0445% ( 8) 00:07:38.618 7.237 - 7.286: 0.9897% ( 170) 00:07:38.619 7.286 - 7.335: 7.9061% ( 1244) 00:07:38.619 7.335 - 7.385: 26.8653% ( 3410) 00:07:38.619 7.385 - 7.434: 53.1580% ( 4729) 00:07:38.619 7.434 - 
7.483: 75.0250% ( 3933) 00:07:38.619 7.483 - 7.532: 86.7063% ( 2101) 00:07:38.619 7.532 - 7.582: 92.4719% ( 1037) 00:07:38.619 7.582 - 7.631: 95.0295% ( 460) 00:07:38.619 7.631 - 7.680: 96.3972% ( 246) 00:07:38.619 7.680 - 7.729: 97.1422% ( 134) 00:07:38.619 7.729 - 7.778: 97.5147% ( 67) 00:07:38.619 7.778 - 7.828: 97.7649% ( 45) 00:07:38.619 7.828 - 7.877: 97.9095% ( 26) 00:07:38.619 7.877 - 7.926: 97.9595% ( 9) 00:07:38.619 7.926 - 7.975: 97.9762% ( 3) 00:07:38.619 7.975 - 8.025: 98.0151% ( 7) 00:07:38.619 8.025 - 8.074: 98.0652% ( 9) 00:07:38.619 8.074 - 8.123: 98.0985% ( 6) 00:07:38.619 8.123 - 8.172: 98.1541% ( 10) 00:07:38.619 8.172 - 8.222: 98.2097% ( 10) 00:07:38.619 8.222 - 8.271: 98.2542% ( 8) 00:07:38.619 8.271 - 8.320: 98.2820% ( 5) 00:07:38.619 8.320 - 8.369: 98.3098% ( 5) 00:07:38.619 8.369 - 8.418: 98.3209% ( 2) 00:07:38.619 8.418 - 8.468: 98.3432% ( 4) 00:07:38.619 8.517 - 8.566: 98.3543% ( 2) 00:07:38.619 8.615 - 8.665: 98.3598% ( 1) 00:07:38.619 8.714 - 8.763: 98.3654% ( 1) 00:07:38.619 8.911 - 8.960: 98.3710% ( 1) 00:07:38.619 9.255 - 9.305: 98.3765% ( 1) 00:07:38.619 9.354 - 9.403: 98.3821% ( 1) 00:07:38.619 9.502 - 9.551: 98.3932% ( 2) 00:07:38.619 9.600 - 9.649: 98.4043% ( 2) 00:07:38.619 9.748 - 9.797: 98.4099% ( 1) 00:07:38.619 9.945 - 9.994: 98.4154% ( 1) 00:07:38.619 9.994 - 10.043: 98.4210% ( 1) 00:07:38.619 10.043 - 10.092: 98.4266% ( 1) 00:07:38.619 10.191 - 10.240: 98.4321% ( 1) 00:07:38.619 10.338 - 10.388: 98.4432% ( 2) 00:07:38.619 10.437 - 10.486: 98.4488% ( 1) 00:07:38.619 10.535 - 10.585: 98.4544% ( 1) 00:07:38.619 10.634 - 10.683: 98.4599% ( 1) 00:07:38.619 10.683 - 10.732: 98.4655% ( 1) 00:07:38.619 10.831 - 10.880: 98.4710% ( 1) 00:07:38.619 10.880 - 10.929: 98.4822% ( 2) 00:07:38.619 10.929 - 10.978: 98.4877% ( 1) 00:07:38.619 10.978 - 11.028: 98.4933% ( 1) 00:07:38.619 11.077 - 11.126: 98.4988% ( 1) 00:07:38.619 11.126 - 11.175: 98.5100% ( 2) 00:07:38.619 11.175 - 11.225: 98.5155% ( 1) 00:07:38.619 11.225 - 11.274: 98.5211% ( 1) 00:07:38.619 11.274 - 11.323: 98.5266% ( 1) 00:07:38.619 11.323 - 11.372: 98.5322% ( 1) 00:07:38.619 11.618 - 11.668: 98.5378% ( 1) 00:07:38.619 11.668 - 11.717: 98.5433% ( 1) 00:07:38.619 12.308 - 12.357: 98.5544% ( 2) 00:07:38.619 12.406 - 12.455: 98.5600% ( 1) 00:07:38.619 12.603 - 12.702: 98.5656% ( 1) 00:07:38.619 12.702 - 12.800: 98.5767% ( 2) 00:07:38.619 12.800 - 12.898: 98.5989% ( 4) 00:07:38.619 12.898 - 12.997: 98.6489% ( 9) 00:07:38.619 12.997 - 13.095: 98.7268% ( 14) 00:07:38.619 13.095 - 13.194: 98.8213% ( 17) 00:07:38.619 13.194 - 13.292: 98.9047% ( 15) 00:07:38.619 13.292 - 13.391: 99.0270% ( 22) 00:07:38.619 13.391 - 13.489: 99.1327% ( 19) 00:07:38.619 13.489 - 13.588: 99.2272% ( 17) 00:07:38.619 13.588 - 13.686: 99.2939% ( 12) 00:07:38.619 13.686 - 13.785: 99.3940% ( 18) 00:07:38.619 13.785 - 13.883: 99.4996% ( 19) 00:07:38.619 13.883 - 13.982: 99.5441% ( 8) 00:07:38.619 13.982 - 14.080: 99.5886% ( 8) 00:07:38.619 14.080 - 14.178: 99.6775% ( 16) 00:07:38.619 14.178 - 14.277: 99.6942% ( 3) 00:07:38.619 14.277 - 14.375: 99.7109% ( 3) 00:07:38.619 14.375 - 14.474: 99.7220% ( 2) 00:07:38.619 14.474 - 14.572: 99.7276% ( 1) 00:07:38.619 14.572 - 14.671: 99.7331% ( 1) 00:07:38.619 14.671 - 14.769: 99.7498% ( 3) 00:07:38.619 14.769 - 14.868: 99.7720% ( 4) 00:07:38.619 14.966 - 15.065: 99.7776% ( 1) 00:07:38.619 15.360 - 15.458: 99.7887% ( 2) 00:07:38.619 15.458 - 15.557: 99.7943% ( 1) 00:07:38.619 15.754 - 15.852: 99.7998% ( 1) 00:07:38.619 15.951 - 16.049: 99.8054% ( 1) 00:07:38.619 16.049 - 16.148: 99.8110% ( 1) 
00:07:38.619 16.443 - 16.542: 99.8165% ( 1) 00:07:38.619 16.837 - 16.935: 99.8276% ( 2) 00:07:38.619 16.935 - 17.034: 99.8332% ( 1) 00:07:38.619 17.231 - 17.329: 99.8443% ( 2) 00:07:38.619 17.428 - 17.526: 99.8554% ( 2) 00:07:38.619 17.625 - 17.723: 99.8610% ( 1) 00:07:38.619 17.723 - 17.822: 99.8721% ( 2) 00:07:38.619 17.920 - 18.018: 99.8777% ( 1) 00:07:38.619 18.117 - 18.215: 99.8832% ( 1) 00:07:38.619 18.314 - 18.412: 99.8888% ( 1) 00:07:38.619 18.806 - 18.905: 99.8944% ( 1) 00:07:38.619 19.200 - 19.298: 99.8999% ( 1) 00:07:38.619 19.692 - 19.791: 99.9055% ( 1) 00:07:38.619 19.791 - 19.889: 99.9166% ( 2) 00:07:38.619 20.185 - 20.283: 99.9277% ( 2) 00:07:38.619 20.382 - 20.480: 99.9333% ( 1) 00:07:38.619 22.745 - 22.843: 99.9388% ( 1) 00:07:38.619 24.025 - 24.123: 99.9444% ( 1) 00:07:38.619 24.812 - 24.911: 99.9500% ( 1) 00:07:38.619 30.326 - 30.523: 99.9555% ( 1) 00:07:38.619 38.203 - 38.400: 99.9611% ( 1) 00:07:38.619 50.412 - 50.806: 99.9666% ( 1) 00:07:38.619 51.594 - 51.988: 99.9722% ( 1) 00:07:38.619 53.563 - 53.957: 99.9778% ( 1) 00:07:38.619 65.772 - 66.166: 99.9833% ( 1) 00:07:38.619 88.222 - 88.615: 99.9889% ( 1) 00:07:38.619 93.735 - 94.129: 99.9944% ( 1) 00:07:38.619 239.458 - 241.034: 100.0000% ( 1) 00:07:38.619 00:07:38.619 ************************************ 00:07:38.619 END TEST nvme_overhead 00:07:38.619 ************************************ 00:07:38.619 00:07:38.619 real 0m1.212s 00:07:38.619 user 0m1.064s 00:07:38.619 sys 0m0.094s 00:07:38.619 09:13:30 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.619 09:13:30 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:38.619 09:13:30 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:38.619 09:13:30 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:38.619 09:13:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.619 09:13:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.619 ************************************ 00:07:38.619 START TEST nvme_arbitration 00:07:38.619 ************************************ 00:07:38.619 09:13:30 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:41.900 Initializing NVMe Controllers 00:07:41.900 Attached to 0000:00:10.0 00:07:41.900 Attached to 0000:00:11.0 00:07:41.900 Attached to 0000:00:13.0 00:07:41.900 Attached to 0000:00:12.0 00:07:41.900 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:41.900 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:41.900 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:41.900 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:41.900 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:41.900 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:41.900 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:41.900 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:41.900 Initialization complete. Launching workers. 
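For the arbitration run launched above, the echoed configuration appears to mean: queue depth 64 (-q), random mixed read/write at a 50% read ratio (-w randrw -M 50), a 3-second run (-t 3), cores 0-3 (-c 0xf), results normalized per 100000 I/Os (-n 100000), and -i 0 again naming the shared-memory instance. In the per-core results that follow, the secs/100000 ios column is simply 100000 divided by the IO/s column. A small sketch (the mask decoder is a hypothetical helper, not part of the harness):

    mask=0xf                               # -c 0xf selects cores 0-3
    for i in 0 1 2 3 4 5 6 7; do
      (( (mask >> i) & 1 )) && echo "core $i selected"
    done
    # cross-check one results row below: 874.67 IO/s -> secs per 100000 ios
    echo 'scale=4; 100000 / 874.67' | bc   # 114.3288, printed as 114.33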
00:07:41.900 Starting thread on core 1 with urgent priority queue 00:07:41.900 Starting thread on core 2 with urgent priority queue 00:07:41.900 Starting thread on core 3 with urgent priority queue 00:07:41.900 Starting thread on core 0 with urgent priority queue 00:07:41.900 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:41.900 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:41.900 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:41.900 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:07:41.900 QEMU NVMe Ctrl (12343 ) core 2: 917.33 IO/s 109.01 secs/100000 ios 00:07:41.900 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:07:41.900 ======================================================== 00:07:41.900 00:07:41.900 00:07:41.900 real 0m3.274s 00:07:41.900 user 0m9.187s 00:07:41.900 sys 0m0.108s 00:07:41.900 ************************************ 00:07:41.900 END TEST nvme_arbitration 00:07:41.900 ************************************ 00:07:41.900 09:13:33 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.900 09:13:33 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:41.900 09:13:33 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.900 09:13:33 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.900 09:13:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.900 09:13:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.900 ************************************ 00:07:41.900 START TEST nvme_single_aen 00:07:41.900 ************************************ 00:07:41.900 09:13:33 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.900 Asynchronous Event Request test 00:07:41.900 Attached to 0000:00:10.0 00:07:41.900 Attached to 0000:00:11.0 00:07:41.900 Attached to 0000:00:13.0 00:07:41.900 Attached to 0000:00:12.0 00:07:41.900 Reset controller to setup AER completions for this process 00:07:41.900 Registering asynchronous event callbacks... 
00:07:41.900 Getting orig temperature thresholds of all controllers 00:07:41.900 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.900 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.900 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.900 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.900 Setting all controllers temperature threshold low to trigger AER 00:07:41.900 Waiting for all controllers temperature threshold to be set lower 00:07:41.900 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.900 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:41.900 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.900 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:41.900 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.900 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:41.900 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.900 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:41.900 Waiting for all controllers to trigger AER and reset threshold 00:07:41.900 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.900 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.900 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.900 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.900 Cleaning up... 00:07:42.158 00:07:42.158 real 0m0.186s 00:07:42.158 user 0m0.057s 00:07:42.158 sys 0m0.096s 00:07:42.158 ************************************ 00:07:42.158 END TEST nvme_single_aen 00:07:42.158 ************************************ 00:07:42.158 09:13:33 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.158 09:13:33 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:42.158 09:13:33 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:42.158 09:13:33 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.158 09:13:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.158 09:13:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.158 ************************************ 00:07:42.158 START TEST nvme_doorbell_aers 00:07:42.158 ************************************ 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
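The nvme_doorbell_aers setup above enumerates the NVMe PCI addresses by piping gen_nvme.sh through jq, then, as the xtrace below shows, gives each device its own 10-second doorbell_aers run under timeout --preserve-status. A consolidated sketch of that loop, with paths and flags exactly as they appear in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
        -r "trtype:PCIe traddr:$bdf"
    done

The per-device "owning process (pid 63587) is not found" errors and the Executing/Waiting/Failure triplets repeat for every controller below; the suite still reaches END TEST nvme_doorbell_aers, so these appear to be tolerated outcomes in this environment rather than fatal failures.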
00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:42.158 09:13:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:42.416 [2024-10-08 09:13:33.919366] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:07:52.378 Executing: test_write_invalid_db 00:07:52.379 Waiting for AER completion... 00:07:52.379 Failure: test_write_invalid_db 00:07:52.379 00:07:52.379 Executing: test_invalid_db_write_overflow_sq 00:07:52.379 Waiting for AER completion... 00:07:52.379 Failure: test_invalid_db_write_overflow_sq 00:07:52.379 00:07:52.379 Executing: test_invalid_db_write_overflow_cq 00:07:52.379 Waiting for AER completion... 00:07:52.379 Failure: test_invalid_db_write_overflow_cq 00:07:52.379 00:07:52.379 09:13:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:52.379 09:13:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:52.379 [2024-10-08 09:13:43.936278] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:02.359 Executing: test_write_invalid_db 00:08:02.359 Waiting for AER completion... 00:08:02.359 Failure: test_write_invalid_db 00:08:02.359 00:08:02.359 Executing: test_invalid_db_write_overflow_sq 00:08:02.359 Waiting for AER completion... 00:08:02.359 Failure: test_invalid_db_write_overflow_sq 00:08:02.359 00:08:02.359 Executing: test_invalid_db_write_overflow_cq 00:08:02.359 Waiting for AER completion... 00:08:02.359 Failure: test_invalid_db_write_overflow_cq 00:08:02.359 00:08:02.359 09:13:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:02.359 09:13:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:02.359 [2024-10-08 09:13:53.964461] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:12.324 Executing: test_write_invalid_db 00:08:12.324 Waiting for AER completion... 00:08:12.324 Failure: test_write_invalid_db 00:08:12.324 00:08:12.324 Executing: test_invalid_db_write_overflow_sq 00:08:12.324 Waiting for AER completion... 00:08:12.324 Failure: test_invalid_db_write_overflow_sq 00:08:12.324 00:08:12.324 Executing: test_invalid_db_write_overflow_cq 00:08:12.324 Waiting for AER completion... 
00:08:12.324 Failure: test_invalid_db_write_overflow_cq 00:08:12.324 00:08:12.324 09:14:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.324 09:14:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:12.582 [2024-10-08 09:14:04.017762] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.551 Executing: test_write_invalid_db 00:08:22.551 Waiting for AER completion... 00:08:22.551 Failure: test_write_invalid_db 00:08:22.551 00:08:22.551 Executing: test_invalid_db_write_overflow_sq 00:08:22.551 Waiting for AER completion... 00:08:22.551 Failure: test_invalid_db_write_overflow_sq 00:08:22.551 00:08:22.551 Executing: test_invalid_db_write_overflow_cq 00:08:22.551 Waiting for AER completion... 00:08:22.551 Failure: test_invalid_db_write_overflow_cq 00:08:22.551 00:08:22.551 00:08:22.551 real 0m40.194s 00:08:22.551 user 0m34.207s 00:08:22.551 sys 0m5.607s 00:08:22.551 09:14:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.551 09:14:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:22.551 ************************************ 00:08:22.551 END TEST nvme_doorbell_aers 00:08:22.551 ************************************ 00:08:22.551 09:14:13 nvme -- nvme/nvme.sh@97 -- # uname 00:08:22.551 09:14:13 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:22.551 09:14:13 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:22.551 09:14:13 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:22.551 09:14:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.551 09:14:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.551 ************************************ 00:08:22.551 START TEST nvme_multi_aen 00:08:22.551 ************************************ 00:08:22.551 09:14:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:22.551 [2024-10-08 09:14:14.064753] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.551 [2024-10-08 09:14:14.064979] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.551 [2024-10-08 09:14:14.065043] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.066353] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.066488] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.066543] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.067522] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. 
Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.067609] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.067655] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.068582] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.068664] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 [2024-10-08 09:14:14.068711] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63587) is not found. Dropping the request. 00:08:22.552 Child process pid: 64108 00:08:22.810 [Child] Asynchronous Event Request test 00:08:22.810 [Child] Attached to 0000:00:10.0 00:08:22.810 [Child] Attached to 0000:00:11.0 00:08:22.810 [Child] Attached to 0000:00:13.0 00:08:22.810 [Child] Attached to 0000:00:12.0 00:08:22.810 [Child] Registering asynchronous event callbacks... 00:08:22.810 [Child] Getting orig temperature thresholds of all controllers 00:08:22.810 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:22.810 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 [Child] Cleaning up... 00:08:22.810 Asynchronous Event Request test 00:08:22.810 Attached to 0000:00:10.0 00:08:22.810 Attached to 0000:00:11.0 00:08:22.810 Attached to 0000:00:13.0 00:08:22.810 Attached to 0000:00:12.0 00:08:22.810 Reset controller to setup AER completions for this process 00:08:22.810 Registering asynchronous event callbacks... 
00:08:22.810 Getting orig temperature thresholds of all controllers 00:08:22.810 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:22.810 Setting all controllers temperature threshold low to trigger AER 00:08:22.810 Waiting for all controllers temperature threshold to be set lower 00:08:22.810 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:22.810 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:22.810 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:22.810 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:22.810 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:22.810 Waiting for all controllers to trigger AER and reset threshold 00:08:22.810 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.810 Cleaning up... 00:08:22.810 ************************************ 00:08:22.810 END TEST nvme_multi_aen 00:08:22.810 ************************************ 00:08:22.810 00:08:22.810 real 0m0.443s 00:08:22.810 user 0m0.128s 00:08:22.810 sys 0m0.192s 00:08:22.810 09:14:14 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.810 09:14:14 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:22.810 09:14:14 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:22.810 09:14:14 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:22.810 09:14:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.810 09:14:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.810 ************************************ 00:08:22.810 START TEST nvme_startup 00:08:22.810 ************************************ 00:08:22.810 09:14:14 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:23.069 Initializing NVMe Controllers 00:08:23.069 Attached to 0000:00:10.0 00:08:23.069 Attached to 0000:00:11.0 00:08:23.069 Attached to 0000:00:13.0 00:08:23.069 Attached to 0000:00:12.0 00:08:23.069 Initialization complete. 00:08:23.069 Time used:138848.297 (us). 
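The nvme_startup test above was invoked with -t 1000000, which looks like an initialization time budget in microseconds; the reported Time used: 138848.297 (us) sits well inside it:

    echo '138848.297 < 1000000' | bc   # 1, i.e. within the assumed budget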
00:08:23.069 00:08:23.069 real 0m0.194s 00:08:23.069 user 0m0.055s 00:08:23.069 sys 0m0.091s 00:08:23.069 ************************************ 00:08:23.069 END TEST nvme_startup 00:08:23.069 ************************************ 00:08:23.069 09:14:14 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.069 09:14:14 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:23.070 09:14:14 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:23.070 09:14:14 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.070 09:14:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.070 09:14:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.070 ************************************ 00:08:23.070 START TEST nvme_multi_secondary 00:08:23.070 ************************************ 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64158 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64159 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:23.070 09:14:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:26.352 Initializing NVMe Controllers 00:08:26.352 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.352 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:26.352 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:26.352 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:26.352 Initialization complete. Launching workers. 
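The nvme_multi_secondary stage starting above runs three spdk_nvme_perf processes against the same controllers, sharing state through -i 0 (the same shared-memory instance id): a 5-second primary on core mask 0x1 (core 0) and two 3-second secondaries on 0x2 and 0x4 (cores 1 and 2), which is why three separate result tables follow, each labeled "from core N". A hedged reconstruction of the launch order (flags verbatim from the xtrace above; the backgrounding and pid captures are inferred from pid0/pid1 and the later wait on 64158):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, core 0
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary, core 1
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # secondary, core 2
    wait "$pid0"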
00:08:26.352 ======================================================== 00:08:26.352 Latency(us) 00:08:26.352 Device Information : IOPS MiB/s Average min max 00:08:26.352 PCIE (0000:00:10.0) NSID 1 from core 2: 3362.67 13.14 4755.36 819.14 12440.35 00:08:26.352 PCIE (0000:00:11.0) NSID 1 from core 2: 3362.67 13.14 4757.69 822.79 12909.86 00:08:26.352 PCIE (0000:00:13.0) NSID 1 from core 2: 3362.67 13.14 4757.38 831.43 16044.03 00:08:26.352 PCIE (0000:00:12.0) NSID 1 from core 2: 3362.67 13.14 4757.72 821.78 16147.67 00:08:26.352 PCIE (0000:00:12.0) NSID 2 from core 2: 3362.67 13.14 4757.70 828.67 12506.82 00:08:26.352 PCIE (0000:00:12.0) NSID 3 from core 2: 3362.67 13.14 4757.69 821.65 13208.51 00:08:26.352 ======================================================== 00:08:26.352 Total : 20176.03 78.81 4757.26 819.14 16147.67 00:08:26.352 00:08:26.352 09:14:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64158 00:08:26.352 Initializing NVMe Controllers 00:08:26.352 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.352 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.352 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:26.352 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:26.352 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:26.352 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:26.352 Initialization complete. Launching workers. 00:08:26.352 ======================================================== 00:08:26.352 Latency(us) 00:08:26.352 Device Information : IOPS MiB/s Average min max 00:08:26.352 PCIE (0000:00:10.0) NSID 1 from core 1: 7882.37 30.79 2028.52 950.54 5843.20 00:08:26.352 PCIE (0000:00:11.0) NSID 1 from core 1: 7882.37 30.79 2029.46 1057.03 5977.58 00:08:26.352 PCIE (0000:00:13.0) NSID 1 from core 1: 7882.37 30.79 2029.47 1039.05 5726.55 00:08:26.352 PCIE (0000:00:12.0) NSID 1 from core 1: 7882.37 30.79 2029.44 1029.79 5871.45 00:08:26.352 PCIE (0000:00:12.0) NSID 2 from core 1: 7882.37 30.79 2029.40 1034.82 5529.83 00:08:26.352 PCIE (0000:00:12.0) NSID 3 from core 1: 7882.37 30.79 2029.37 1033.86 5638.32 00:08:26.352 ======================================================== 00:08:26.352 Total : 47294.22 184.74 2029.27 950.54 5977.58 00:08:26.352 00:08:28.253 Initializing NVMe Controllers 00:08:28.253 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.253 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:28.253 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:28.253 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:28.253 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:28.253 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:28.253 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:28.253 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:28.253 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:28.253 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:28.253 Initialization complete. Launching workers. 
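The perf tables here are internally consistent: MiB/s is IOPS times the 4096-byte I/O size, and with -q 16 the average latency tracks Little's law (latency is roughly queue depth divided by IOPS). Checking the core-1 table above:

    echo 'scale=2; 7882.37 * 4096 / 1048576' | bc   # 30.79 MiB/s, as reported
    echo 'scale=1; 16 * 1000000 / 7882.37' | bc     # ~2029.8 us vs. ~2028.5 us reported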
00:08:28.253 ======================================================== 00:08:28.253 Latency(us) 00:08:28.253 Device Information : IOPS MiB/s Average min max 00:08:28.253 PCIE (0000:00:10.0) NSID 1 from core 0: 11144.78 43.53 1434.43 688.06 7560.21 00:08:28.253 PCIE (0000:00:11.0) NSID 1 from core 0: 11144.78 43.53 1435.27 680.92 8229.97 00:08:28.253 PCIE (0000:00:13.0) NSID 1 from core 0: 11144.78 43.53 1435.25 656.89 7585.80 00:08:28.253 PCIE (0000:00:12.0) NSID 1 from core 0: 11144.78 43.53 1435.24 637.44 6530.81 00:08:28.253 PCIE (0000:00:12.0) NSID 2 from core 0: 11144.78 43.53 1435.22 591.27 6227.04 00:08:28.253 PCIE (0000:00:12.0) NSID 3 from core 0: 11144.78 43.53 1435.21 584.03 6853.86 00:08:28.253 ======================================================== 00:08:28.253 Total : 66868.68 261.21 1435.10 584.03 8229.97 00:08:28.253 00:08:28.253 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64159 00:08:28.253 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64228 00:08:28.253 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:28.253 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64229 00:08:28.253 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:28.254 09:14:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:31.583 Initializing NVMe Controllers 00:08:31.583 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.583 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.583 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.583 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.583 Initialization complete. Launching workers. 
00:08:31.583 ======================================================== 00:08:31.583 Latency(us) 00:08:31.583 Device Information : IOPS MiB/s Average min max 00:08:31.583 PCIE (0000:00:10.0) NSID 1 from core 1: 8148.53 31.83 1962.23 689.34 5783.62 00:08:31.583 PCIE (0000:00:11.0) NSID 1 from core 1: 8148.53 31.83 1963.23 717.23 5400.17 00:08:31.583 PCIE (0000:00:13.0) NSID 1 from core 1: 8148.53 31.83 1963.26 729.03 6058.47 00:08:31.583 PCIE (0000:00:12.0) NSID 1 from core 1: 8148.53 31.83 1963.30 724.87 6201.61 00:08:31.583 PCIE (0000:00:12.0) NSID 2 from core 1: 8148.53 31.83 1963.30 722.00 6159.70 00:08:31.583 PCIE (0000:00:12.0) NSID 3 from core 1: 8148.53 31.83 1963.48 713.08 6021.96 00:08:31.583 ======================================================== 00:08:31.583 Total : 48891.18 190.98 1963.13 689.34 6201.61 00:08:31.583 00:08:31.583 Initializing NVMe Controllers 00:08:31.583 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.583 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.583 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:31.583 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:31.583 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:31.583 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:31.583 Initialization complete. Launching workers. 00:08:31.583 ======================================================== 00:08:31.583 Latency(us) 00:08:31.583 Device Information : IOPS MiB/s Average min max 00:08:31.583 PCIE (0000:00:10.0) NSID 1 from core 0: 8002.52 31.26 1998.02 696.56 5785.44 00:08:31.583 PCIE (0000:00:11.0) NSID 1 from core 0: 8002.52 31.26 1998.97 722.74 5617.63 00:08:31.583 PCIE (0000:00:13.0) NSID 1 from core 0: 8002.52 31.26 1998.94 724.88 6000.47 00:08:31.583 PCIE (0000:00:12.0) NSID 1 from core 0: 8002.52 31.26 1998.88 721.03 5995.85 00:08:31.583 PCIE (0000:00:12.0) NSID 2 from core 0: 8002.52 31.26 1998.84 732.89 5719.27 00:08:31.583 PCIE (0000:00:12.0) NSID 3 from core 0: 8002.52 31.26 1998.86 729.53 6253.02 00:08:31.583 ======================================================== 00:08:31.583 Total : 48015.13 187.56 1998.75 696.56 6253.02 00:08:31.583 00:08:34.126 Initializing NVMe Controllers 00:08:34.126 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.126 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.126 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.126 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.126 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:34.126 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:34.126 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:34.126 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:34.126 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:34.126 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:34.126 Initialization complete. Launching workers. 
00:08:34.126 ======================================================== 00:08:34.126 Latency(us) 00:08:34.126 Device Information : IOPS MiB/s Average min max 00:08:34.126 PCIE (0000:00:10.0) NSID 1 from core 2: 4655.75 18.19 3434.09 729.09 12187.65 00:08:34.126 PCIE (0000:00:11.0) NSID 1 from core 2: 4655.75 18.19 3435.87 710.90 12674.96 00:08:34.126 PCIE (0000:00:13.0) NSID 1 from core 2: 4655.75 18.19 3435.99 736.70 12125.53 00:08:34.126 PCIE (0000:00:12.0) NSID 1 from core 2: 4655.75 18.19 3435.93 730.00 12273.14 00:08:34.126 PCIE (0000:00:12.0) NSID 2 from core 2: 4655.75 18.19 3435.88 683.56 13136.72 00:08:34.126 PCIE (0000:00:12.0) NSID 3 from core 2: 4655.75 18.19 3435.47 669.72 12612.12 00:08:34.126 ======================================================== 00:08:34.126 Total : 27934.52 109.12 3435.54 669.72 13136.72 00:08:34.126 00:08:34.126 ************************************ 00:08:34.126 END TEST nvme_multi_secondary 00:08:34.126 ************************************ 00:08:34.126 09:14:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64228 00:08:34.126 09:14:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64229 00:08:34.126 00:08:34.126 real 0m10.663s 00:08:34.126 user 0m18.348s 00:08:34.126 sys 0m0.632s 00:08:34.126 09:14:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.126 09:14:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:34.126 09:14:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:34.126 09:14:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63190 ]] 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1090 -- # kill 63190 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1091 -- # wait 63190 00:08:34.126 [2024-10-08 09:14:25.292023] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.292250] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.292287] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.292306] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.294789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.294856] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.294874] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.294893] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.297277] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 
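The burst of "owning process (pid 64107) is not found. Dropping the request." errors here appears to be teardown noise rather than a new failure: kill_stub has just terminated the long-lived stub process, and admin requests still registered to the already-exited AER test process are discarded during cleanup, after which the run proceeds to the next test. The teardown itself, generalized from the xtrace above (the pid is this run's value):

    stub_pid=63190
    [[ -e /proc/$stub_pid ]] && kill "$stub_pid" && wait "$stub_pid"
    rm -f /var/run/spdk_stub0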
00:08:34.126 [2024-10-08 09:14:25.297328] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.297345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.297363] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.299746] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.299802] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.299819] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 [2024-10-08 09:14:25.299837] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64107) is not found. Dropping the request. 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:34.126 09:14:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.126 09:14:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.126 ************************************ 00:08:34.126 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:34.126 ************************************ 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:34.126 * Looking for test storage... 
00:08:34.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.126 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.127 --rc genhtml_branch_coverage=1 00:08:34.127 --rc genhtml_function_coverage=1 00:08:34.127 --rc genhtml_legend=1 00:08:34.127 --rc geninfo_all_blocks=1 00:08:34.127 --rc geninfo_unexecuted_blocks=1 00:08:34.127 00:08:34.127 ' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:34.127 
09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:34.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64391 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64391 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64391 ']' 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
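bdev_nvme_reset_stuck_adm_cmd starts its own SPDK target (spdk_tgt -m 0xF, captured as spdk_target_pid=64391) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A hedged sketch of that launch-and-wait pattern (the polling loop is a simplification, not the actual waitforlisten implementation):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1            # keep polling until the target is listening
    done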
00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.127 09:14:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.127 [2024-10-08 09:14:25.730902] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:08:34.127 [2024-10-08 09:14:25.731024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64391 ] 00:08:34.386 [2024-10-08 09:14:25.894484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.644 [2024-10-08 09:14:26.085550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.644 [2024-10-08 09:14:26.085839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.644 [2024-10-08 09:14:26.085887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.644 [2024-10-08 09:14:26.085913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.209 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.210 nvme0n1 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_HACEz.txt 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.210 true 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728378866 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64414 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:35.210 09:14:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.150 [2024-10-08 09:14:28.785851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:37.150 [2024-10-08 09:14:28.786225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:37.150 [2024-10-08 09:14:28.786323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:37.150 [2024-10-08 09:14:28.786384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:37.150 [2024-10-08 09:14:28.787853] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64414 00:08:37.150 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64414 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64414 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:37.150 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_HACEz.txt 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_HACEz.txt 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64391 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64391 ']' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64391 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64391 00:08:37.409 killing process with pid 64391 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64391' 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64391 00:08:37.409 09:14:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64391 00:08:38.785 09:14:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:38.785 09:14:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:38.785 ************************************ 00:08:38.785 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.785 ************************************ 00:08:38.785 00:08:38.785 real 0m4.698s 
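Before the run moves on to nvme_fio, the status check that just finished deserves a gloss: bdev_nvme_send_cmd wrote the raw 16-byte completion to the temp file as base64, and base64_decode_bits recovers the NVMe status word from its last two bytes (bit 0 is the phase tag, bits 1..8 the status code, bits 9..11 the status code type). The sketch below re-implements that idea for illustration; it is not the script's exact code, but it reproduces the 0x1/0x0 values seen above.

#!/usr/bin/env bash
# Hedged re-implementation of the base64_decode_bits idea traced above.
decode_bits() {
    local b64=$1 shift_by=$2 mask=$3
    local bytes=($(base64 -d <<<"$b64" | hexdump -ve '/1 "0x%02x\n"'))
    local n=${#bytes[@]}
    local status=$(( bytes[n-1] << 8 | bytes[n-2] ))   # little-endian u16
    printf '0x%x\n' $(( (status >> shift_by) & mask ))
}
cpl='AAAAAAAAAAAAAAAAAAACAA=='   # the .cpl field captured in the log
decode_bits "$cpl" 1 0xff        # SC  -> 0x1, matching the injected --sc 1
decode_bits "$cpl" 9 0x07        # SCT -> 0x0, matching the injected --sct 0

With the pair decoded, the test only has to compare it against the values it injected and check that diff_time stayed within test_timeout, which is exactly what the two (( ... )) guards above do.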
00:08:38.785 user 0m16.211s 00:08:38.785 sys 0m0.518s 00:08:38.785 09:14:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.785 09:14:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:38.785 09:14:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:38.785 09:14:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:38.785 09:14:30 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.785 09:14:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.785 09:14:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.785 ************************************ 00:08:38.785 START TEST nvme_fio 00:08:38.785 ************************************ 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:38.785 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.785 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:39.043 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:39.043 09:14:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:39.043 09:14:30 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:39.043 09:14:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:39.301 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:39.301 fio-3.35 00:08:39.301 Starting 1 thread 00:08:45.854 00:08:45.854 test: (groupid=0, jobs=1): err= 0: pid=64548: Tue Oct 8 09:14:36 2024 00:08:45.854 read: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec) 00:08:45.854 slat (nsec): min=3308, max=81356, avg=5093.28, stdev=2440.29 00:08:45.854 clat (usec): min=270, max=12846, avg=2891.12, stdev=976.14 00:08:45.854 lat (usec): min=276, max=12905, avg=2896.22, stdev=977.38 00:08:45.854 clat percentiles (usec): 00:08:45.854 | 1.00th=[ 1500], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:45.854 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2606], 00:08:45.854 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4293], 95.00th=[ 5342], 00:08:45.854 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7570], 99.95th=[ 9503], 00:08:45.854 | 99.99th=[12518] 00:08:45.854 bw ( KiB/s): min=82880, max=92344, per=97.25%, avg=86037.33, stdev=5461.74, samples=3 00:08:45.854 iops : min=20720, max=23086, avg=21509.33, stdev=1365.43, samples=3 00:08:45.854 write: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec); 0 zone resets 00:08:45.854 slat (nsec): min=3431, max=73117, avg=5273.66, stdev=2292.95 00:08:45.854 clat (usec): min=294, max=12649, avg=2891.99, stdev=962.30 00:08:45.854 lat (usec): min=300, max=12662, avg=2897.27, stdev=963.42 00:08:45.854 clat percentiles (usec): 00:08:45.855 | 1.00th=[ 1516], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:45.855 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2638], 00:08:45.855 | 70.00th=[ 2835], 80.00th=[ 3228], 90.00th=[ 4228], 95.00th=[ 5276], 00:08:45.855 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7767], 99.95th=[10028], 00:08:45.855 | 99.99th=[12125] 00:08:45.855 bw ( KiB/s): min=82864, max=92640, per=98.08%, avg=86197.33, stdev=5580.64, samples=3 00:08:45.855 iops : min=20716, max=23160, avg=21549.33, stdev=1395.16, samples=3 00:08:45.855 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.10% 00:08:45.855 lat (msec) : 2=3.68%, 4=84.33%, 10=11.82%, 20=0.05% 00:08:45.855 cpu : usr=99.10%, sys=0.20%, ctx=3, majf=0, 
minf=607 00:08:45.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:45.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:45.855 issued rwts: total=44255,43964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:45.855 00:08:45.855 Run status group 0 (all jobs): 00:08:45.855 READ: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:08:45.855 WRITE: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:08:45.855 ----------------------------------------------------- 00:08:45.855 Suppressions used: 00:08:45.855 count bytes template 00:08:45.855 1 32 /usr/src/fio/parse.c 00:08:45.855 1 8 libtcmalloc_minimal.so 00:08:45.855 ----------------------------------------------------- 00:08:45.855 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.855 09:14:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:45.855 09:14:36 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:45.855 09:14:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:45.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.855 fio-3.35 00:08:45.855 Starting 1 thread 00:08:51.211 00:08:51.211 test: (groupid=0, jobs=1): err= 0: pid=64603: Tue Oct 8 09:14:42 2024 00:08:51.211 read: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec) 00:08:51.211 slat (usec): min=3, max=160, avg= 5.14, stdev= 2.44 00:08:51.211 clat (usec): min=891, max=8781, avg=2924.99, stdev=1002.57 00:08:51.211 lat (usec): min=895, max=8797, avg=2930.13, stdev=1003.65 00:08:51.211 clat percentiles (usec): 00:08:51.211 | 1.00th=[ 1663], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:51.211 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:08:51.211 | 70.00th=[ 2802], 80.00th=[ 3294], 90.00th=[ 4555], 95.00th=[ 5342], 00:08:51.211 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7767], 99.95th=[ 8029], 00:08:51.211 | 99.99th=[ 8586] 00:08:51.211 bw ( KiB/s): min=73440, max=100608, per=100.00%, avg=88445.33, stdev=13805.28, samples=3 00:08:51.211 iops : min=18360, max=25152, avg=22111.33, stdev=3451.32, samples=3 00:08:51.211 write: IOPS=21.6k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:08:51.211 slat (usec): min=3, max=110, avg= 5.37, stdev= 2.32 00:08:51.211 clat (usec): min=905, max=8658, avg=2943.95, stdev=1009.41 00:08:51.211 lat (usec): min=910, max=8663, avg=2949.32, stdev=1010.46 00:08:51.211 clat percentiles (usec): 00:08:51.211 | 1.00th=[ 1680], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:08:51.211 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2606], 00:08:51.211 | 70.00th=[ 2835], 80.00th=[ 3326], 90.00th=[ 4621], 95.00th=[ 5342], 00:08:51.211 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7701], 99.95th=[ 7963], 00:08:51.211 | 99.99th=[ 8291] 00:08:51.211 bw ( KiB/s): min=74808, max=100016, per=100.00%, avg=88589.33, stdev=12767.90, samples=3 00:08:51.211 iops : min=18702, max=25004, avg=22147.33, stdev=3191.97, samples=3 00:08:51.211 lat (usec) : 1000=0.02% 00:08:51.211 lat (msec) : 2=2.64%, 4=83.34%, 10=14.00% 00:08:51.211 cpu : usr=99.15%, sys=0.05%, ctx=2, majf=0, minf=608 00:08:51.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:51.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.211 issued rwts: total=43623,43318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.211 00:08:51.211 Run status group 0 (all jobs): 00:08:51.211 READ: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:08:51.211 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:08:51.469 ----------------------------------------------------- 00:08:51.469 Suppressions used: 00:08:51.469 count bytes template 00:08:51.469 1 32 /usr/src/fio/parse.c 00:08:51.469 1 8 libtcmalloc_minimal.so 00:08:51.469 ----------------------------------------------------- 00:08:51.469 00:08:51.469 09:14:42 nvme.nvme_fio -- nvme/nvme.sh@44 
-- # ran_fio=true 00:08:51.469 09:14:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.469 09:14:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.469 09:14:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.727 09:14:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.727 09:14:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.984 09:14:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.984 09:14:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.984 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.984 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:51.984 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.984 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:51.984 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.985 09:14:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.985 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.985 fio-3.35 00:08:51.985 Starting 1 thread 00:09:00.092 00:09:00.092 test: (groupid=0, jobs=1): err= 0: pid=64658: Tue Oct 8 09:14:50 2024 00:09:00.092 read: IOPS=25.0k, BW=97.5MiB/s (102MB/s)(195MiB/2001msec) 00:09:00.092 slat (nsec): min=4192, max=52097, avg=4849.87, stdev=1835.14 00:09:00.092 clat (usec): min=341, max=11985, avg=2557.16, stdev=698.48 00:09:00.092 lat (usec): min=345, max=12028, avg=2562.01, stdev=699.60 00:09:00.092 clat percentiles (usec): 00:09:00.092 | 1.00th=[ 1418], 5.00th=[ 1975], 10.00th=[ 2212], 20.00th=[ 2343], 00:09:00.092 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 
2442], 00:09:00.092 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2769], 95.00th=[ 3884], 00:09:00.092 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 7373], 99.95th=[ 8291], 00:09:00.092 | 99.99th=[11600] 00:09:00.092 bw ( KiB/s): min=96344, max=102272, per=98.58%, avg=98437.33, stdev=3325.58, samples=3 00:09:00.092 iops : min=24086, max=25568, avg=24609.33, stdev=831.39, samples=3 00:09:00.092 write: IOPS=24.8k, BW=97.0MiB/s (102MB/s)(194MiB/2001msec); 0 zone resets 00:09:00.092 slat (nsec): min=4280, max=44711, avg=5122.72, stdev=1840.36 00:09:00.092 clat (usec): min=206, max=11812, avg=2563.71, stdev=702.23 00:09:00.092 lat (usec): min=211, max=11825, avg=2568.83, stdev=703.35 00:09:00.092 clat percentiles (usec): 00:09:00.092 | 1.00th=[ 1418], 5.00th=[ 1975], 10.00th=[ 2212], 20.00th=[ 2343], 00:09:00.092 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:00.092 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2802], 95.00th=[ 3982], 00:09:00.092 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 7439], 99.95th=[ 8717], 00:09:00.092 | 99.99th=[11338] 00:09:00.092 bw ( KiB/s): min=96472, max=102248, per=99.17%, avg=98474.67, stdev=3269.86, samples=3 00:09:00.092 iops : min=24118, max=25562, avg=24618.67, stdev=817.47, samples=3 00:09:00.092 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.07% 00:09:00.092 lat (msec) : 2=5.29%, 4=89.76%, 10=4.82%, 20=0.03% 00:09:00.092 cpu : usr=99.25%, sys=0.10%, ctx=5, majf=0, minf=607 00:09:00.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.092 issued rwts: total=49950,49673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.092 00:09:00.092 Run status group 0 (all jobs): 00:09:00.092 READ: bw=97.5MiB/s (102MB/s), 97.5MiB/s-97.5MiB/s (102MB/s-102MB/s), io=195MiB (205MB), run=2001-2001msec 00:09:00.092 WRITE: bw=97.0MiB/s (102MB/s), 97.0MiB/s-97.0MiB/s (102MB/s-102MB/s), io=194MiB (203MB), run=2001-2001msec 00:09:00.092 ----------------------------------------------------- 00:09:00.092 Suppressions used: 00:09:00.092 count bytes template 00:09:00.092 1 32 /usr/src/fio/parse.c 00:09:00.092 1 8 libtcmalloc_minimal.so 00:09:00.092 ----------------------------------------------------- 00:09:00.092 00:09:00.092 09:14:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.092 09:14:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.092 09:14:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.092 09:14:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.093 09:14:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.093 09:14:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.093 09:14:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.093 09:14:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.093 09:14:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.093 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.093 fio-3.35 00:09:00.093 Starting 1 thread 00:09:12.299 00:09:12.299 test: (groupid=0, jobs=1): err= 0: pid=64719: Tue Oct 8 09:15:03 2024 00:09:12.299 read: IOPS=23.8k, BW=93.2MiB/s (97.7MB/s)(186MiB/2001msec) 00:09:12.299 slat (usec): min=4, max=123, avg= 5.09, stdev= 2.38 00:09:12.299 clat (usec): min=375, max=11382, avg=2681.10, stdev=860.51 00:09:12.299 lat (usec): min=379, max=11505, avg=2686.18, stdev=862.09 00:09:12.299 clat percentiles (usec): 00:09:12.299 | 1.00th=[ 1696], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:12.299 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:09:12.299 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3195], 95.00th=[ 5080], 00:09:12.299 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7832], 99.95th=[ 8225], 00:09:12.299 | 99.99th=[11076] 00:09:12.299 bw ( KiB/s): min=93752, max=97216, per=99.76%, avg=95168.00, stdev=1816.42, samples=3 00:09:12.299 iops : min=23438, max=24304, avg=23792.00, stdev=454.11, samples=3 00:09:12.299 write: IOPS=23.7k, BW=92.6MiB/s (97.1MB/s)(185MiB/2001msec); 0 zone resets 00:09:12.299 slat (nsec): min=4334, max=71163, avg=5353.43, stdev=2320.88 00:09:12.299 clat (usec): min=210, max=11187, avg=2682.22, stdev=854.03 00:09:12.299 lat (usec): min=215, max=11198, avg=2687.58, stdev=855.57 00:09:12.299 clat percentiles (usec): 00:09:12.299 | 1.00th=[ 1713], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:12.299 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:12.299 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3195], 95.00th=[ 5080], 00:09:12.299 | 
99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 7898], 99.95th=[ 8455], 00:09:12.299 | 99.99th=[10683] 00:09:12.299 bw ( KiB/s): min=93552, max=96576, per=100.00%, avg=95170.67, stdev=1523.25, samples=3 00:09:12.299 iops : min=23388, max=24144, avg=23792.67, stdev=380.81, samples=3 00:09:12.299 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:12.299 lat (msec) : 2=2.60%, 4=89.56%, 10=7.77%, 20=0.02% 00:09:12.299 cpu : usr=99.30%, sys=0.00%, ctx=9, majf=0, minf=606 00:09:12.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.299 issued rwts: total=47720,47421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.299 00:09:12.299 Run status group 0 (all jobs): 00:09:12.299 READ: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:12.299 WRITE: bw=92.6MiB/s (97.1MB/s), 92.6MiB/s-92.6MiB/s (97.1MB/s-97.1MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:12.299 ----------------------------------------------------- 00:09:12.299 Suppressions used: 00:09:12.299 count bytes template 00:09:12.299 1 32 /usr/src/fio/parse.c 00:09:12.299 1 8 libtcmalloc_minimal.so 00:09:12.299 ----------------------------------------------------- 00:09:12.299 00:09:12.299 09:15:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.299 09:15:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:12.299 00:09:12.299 real 0m33.368s 00:09:12.299 user 0m19.536s 00:09:12.299 sys 0m25.321s 00:09:12.299 09:15:03 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.299 ************************************ 00:09:12.299 END TEST nvme_fio 00:09:12.299 ************************************ 00:09:12.299 09:15:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:12.299 ************************************ 00:09:12.299 END TEST nvme 00:09:12.299 ************************************ 00:09:12.299 00:09:12.299 real 1m42.042s 00:09:12.299 user 3m38.599s 00:09:12.299 sys 0m35.653s 00:09:12.299 09:15:03 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.299 09:15:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.299 09:15:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:12.299 09:15:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:12.299 09:15:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.299 09:15:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.299 09:15:03 -- common/autotest_common.sh@10 -- # set +x 00:09:12.299 ************************************ 00:09:12.299 START TEST nvme_scc 00:09:12.299 ************************************ 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:12.299 * Looking for test storage... 
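Every nvme_fio pass above repeats the same preflight before launching fio: ldd the SPDK fio plugin, grep for a sanitizer runtime, and put that runtime ahead of the plugin in LD_PRELOAD so ASAN is initialized before fio dlopen()s the ioengine. A condensed sketch of that dance, reusing the paths and arguments that appear in the trace:

#!/usr/bin/env bash
# Sketch of the LD_PRELOAD preflight repeated before each fio run above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
for sanitizer in libasan libclang_rt.asan; do
    # Path of the sanitizer runtime the plugin links against, empty if none.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Preloading the runtime first keeps the ASAN interceptors in place when fio
# later dlopen()s the SPDK ioengine.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096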
00:09:12.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.299 09:15:03 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.299 --rc genhtml_branch_coverage=1 00:09:12.299 --rc genhtml_function_coverage=1 00:09:12.299 --rc genhtml_legend=1 00:09:12.299 --rc geninfo_all_blocks=1 00:09:12.299 --rc geninfo_unexecuted_blocks=1 00:09:12.299 00:09:12.299 ' 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.299 --rc genhtml_branch_coverage=1 00:09:12.299 --rc genhtml_function_coverage=1 00:09:12.299 --rc genhtml_legend=1 00:09:12.299 --rc geninfo_all_blocks=1 00:09:12.299 --rc geninfo_unexecuted_blocks=1 00:09:12.299 00:09:12.299 ' 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.299 --rc genhtml_branch_coverage=1 00:09:12.299 --rc genhtml_function_coverage=1 00:09:12.299 --rc genhtml_legend=1 00:09:12.299 --rc geninfo_all_blocks=1 00:09:12.299 --rc geninfo_unexecuted_blocks=1 00:09:12.299 00:09:12.299 ' 00:09:12.299 09:15:03 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:12.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.300 --rc genhtml_branch_coverage=1 00:09:12.300 --rc genhtml_function_coverage=1 00:09:12.300 --rc genhtml_legend=1 00:09:12.300 --rc geninfo_all_blocks=1 00:09:12.300 --rc geninfo_unexecuted_blocks=1 00:09:12.300 00:09:12.300 ' 00:09:12.300 09:15:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.300 09:15:03 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.300 09:15:03 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.300 09:15:03 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.300 09:15:03 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.300 09:15:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.300 09:15:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.300 09:15:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.300 09:15:03 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:12.300 09:15:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
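The lt 1.15 2 gate traced just above is scripts/common.sh's pure-bash version comparison: split both strings on the . - : separators, then compare component by component, treating missing components as zero. A simplified numeric-only sketch (the real helper additionally routes each component through decimal for validation):

#!/usr/bin/env bash
# Simplified sketch of the cmp_versions walk traced above; assumes numeric parts.
ver_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<<"$1"
    IFS='.-:' read -ra ver2 <<<"$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov 1.x detected"   # true here, as in the trace

In the trace, the gate returning 0 is what selects the lcov_-prefixed spelling of the branch/function coverage flags that then lands in LCOV_OPTS.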
00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:12.300 09:15:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:12.300 09:15:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.300 09:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:12.300 09:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:12.300 09:15:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:12.300 09:15:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:12.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.560 Waiting for block devices as requested 00:09:12.820 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.820 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.820 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.820 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.118 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.118 09:15:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:18.118 09:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.118 09:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:18.118 09:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.118 09:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
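The register dump scan_nvme_ctrls is producing here comes from nvme_get: it runs nvme id-ctrl against the device and walks the "name : value" lines with IFS=: read -r reg val, eval'ing each pair into the nvme0 associative array. Below is a simplified stand-in with sample input inlined; the real helper reads the live nvme-cli output and evals into a global array named after the controller.

#!/usr/bin/env bash
# Simplified sketch of the nvme_get parsing loop driving the dump above.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}            # drop the padding around the key
    [[ -n $reg ]] && ctrl[$reg]=${val# }
done <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
mdts      : 7
oacs      : 0x12a
EOF
printf '%s=%s\n' vid "${ctrl[vid]}" mdts "${ctrl[mdts]}"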
00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.118 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
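One concrete takeaway from the values landing in nvme0 here: mdts=7 bounds the largest single transfer this QEMU controller accepts. MDTS is expressed as a power-of-two multiple of CAP.MPSMIN; assuming the usual 4 KiB minimum page size (CAP is not part of this dump, so that is an assumption), the limit works out as:

# mdts=7 from the id-ctrl walk above; the 4 KiB MPSMIN is an assumption.
mdts=7; mpsmin=4096
echo $(( (1 << mdts) * mpsmin ))   # 524288 bytes, i.e. 512 KiB per command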
00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.119 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:18.120 09:15:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:18.120 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.121 09:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.121 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
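
From the id-ns fields captured above, the namespace geometry can be cross-checked by hand: nsze/ncap/nuse are all 0x140000 blocks, and flbas=0x4 selects LBA format 4, whose descriptor further down in this trace reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A quick arithmetic sketch (values copied from this trace):

  nsze=0x140000   # namespace size in logical blocks
  lbads=12        # from lbaf4, the format flagged "(in use)"
  echo $(( nsze * (1 << lbads) ))                 # 5368709120 bytes
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"   # 5 GiB

So nvme0n1 is exactly 5 GiB.
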
00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
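
This run is the nvme_scc suite, which exercises the NVMe Simple Copy command, and the controller fields being cached here are what such a test consults before running: the dump above recorded nvme0[oncs]=0x15d, and bit 8 (0x100) of ONCS is the spec's Copy-support flag. A sketch of that capability gate, assuming the arrays populated as traced (the suite's actual helper may differ):

  # Sketch: proceed only if the controller advertises Copy (ONCS bit 8).
  oncs=${nvme0[oncs]:-0}
  if (( oncs & 0x100 )); then
      echo "nvme0 advertises Simple Copy (oncs=$oncs)"
  else
      echo "nvme0 lacks Simple Copy support; skipping"
  fi
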
00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.122 09:15:09 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.122 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:18.123 09:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.123 09:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:18.123 09:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.123 09:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:18.123 09:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.123 
09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.123 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
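
The jump from nvme0 to nvme1 visible above (functions.sh@47-63: for ctrl in /sys/class/nvme/nvme*, pci_can_use 0000:00:10.0, then a fresh nvme_get) is the outer discovery loop that fills the ctrls/nvmes/bdfs maps. A condensed sketch of that walk, reusing the hypothetical nvme_get_sketch helper from earlier; the PCI-address lookup via readlink is an assumption, and the real loop also honors allow/skip lists through pci_can_use:

  declare -A ctrls bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                               # nvme0, nvme1, ...
      pci=$(basename "$(readlink -f "$ctrl/device")")    # assumed BDF lookup, e.g. 0000:00:10.0
      nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
      for ns in "$ctrl/${ctrl_dev}n"*; do                # namespaces: nvme0n1, nvme0n2, ...
          [[ -e $ns ]] || continue
          nvme_get_sketch "${ns##*/}" nvme id-ns "/dev/${ns##*/}"
      done
      ctrls[$ctrl_dev]=$ctrl_dev
      bdfs[$ctrl_dev]=$pci
  done
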
00:09:18.124 09:15:09 nvme_scc -- nvme/functions.sh@21-23 -- [xtrace condensed: one IFS=: / read -r reg val / eval cycle per identify field]
00:09:18.124 nvme1 id-ctrl (cont.): frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:18.125 nvme1 id-ctrl (cont.): sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
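The block above (and the matching blocks for the other controllers) is the same five-line xtrace cycle repeated once per identify field, all produced by functions.sh's nvme_get helper: run nvme-cli, split each "name : value" output line on the colon, and eval the pair into a global associative array. A minimal re-sketch of that loop, assuming nvme-cli's id-ctrl/id-ns output layout; the exact whitespace trimming in the real script may differ:

    # Sketch of the parse loop traced at functions.sh@16-23; the trimming
    # details are assumptions, not the verbatim script.
    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                   # e.g. declares global nvme1=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # 'sn      ' -> 'sn'
        val=${val# }                        # drop the space after the colon
        [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # usage: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[subnqn]}"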
00:09:18.125 nvme1 id-ctrl (cont.): ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:18.126 09:15:09 nvme_scc -- nvme/functions.sh@53-57 -- namespace scan: /sys/class/nvme/nvme1/nvme1n1 exists; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:18.126 nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.127 nvme1n1 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:18.127 09:15:09 nvme_scc -- nvme/functions.sh@58-63 -- registered: _ctrl_ns[1]=nvme1n1 ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:09:18.127 09:15:09 nvme_scc -- nvme/functions.sh@47-52 -- next controller: /sys/class/nvme/nvme2 (pci=0000:00:12.0, pci_can_use returns 0); nvme_get nvme2 id-ctrl /dev/nvme2
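The @47-63 lines are one pass of the discovery loop that fills the global controller tables (ctrls, nvmes, bdfs, ordered_ctrls). A condensed sketch of that pass, reusing the nvme_get sketch above; pci_can_use here is a permissive stand-in for the scripts/common.sh filter seen in the trace, and deriving the BDF via readlink is an assumption:

    # Condensed sketch of the discovery pass (functions.sh@47-63).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { return 0; }             # stub for scripts/common.sh@18-27

    scan_ctrl() {
      local ctrl=$1 pci ctrl_dev ns ns_dev
      pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup
      pci_can_use "$pci" || return 0
      ctrl_dev=${ctrl##*/}                  # nvme1, nvme2, ...
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      local -n _ctrl_ns="${ctrl_dev}_ns"
      for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme1/nvme1n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # nvme1_ns[1]=nvme1n1
      done
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    }

    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] && scan_ctrl "$ctrl"
    done

For scale: with lbaf7 in use (lbads:12, i.e. 4096-byte blocks) and nsze=0x17a17a (1,548,666) blocks, nvme1n1 works out to roughly 5.9 GiB.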
00:09:18.127 09:15:09 nvme_scc -- nvme/functions.sh@21-23 -- [xtrace condensed] nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:18.128 nvme2 id-ctrl (cont.): oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
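So far nvme2 mirrors nvme1 field for field, as expected for identical QEMU controllers. A few of these encodings are easy to misread, so here is a hypothetical decode of the values captured above (kelvin temperatures; power-of-two queue entry sizes; MDTS in units of the minimum page size, assumed to be 4 KiB for this controller):

    # Hypothetical decode of fields from the trace above; the 4 KiB MPSMIN
    # is an assumption about the QEMU controller, not read from the log.
    sqes=0x66 cqes=0x44 wctemp=343 cctemp=373 mdts=7
    printf 'SQ entry size: min %d B, max %d B\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
    printf 'CQ entry size: min %d B, max %d B\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
    printf 'temp thresholds: warning %d C, critical %d C\n' $((wctemp - 273)) $((cctemp - 273))
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4 ))   # 2^mdts pages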
00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.128 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:18.129 09:15:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:18.129 09:15:09 nvme_scc -- nvme/functions.sh@21-23 -- #   oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:09:18.129 09:15:09 nvme_scc --   mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:18.130 09:15:09 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:18.130 09:15:09 nvme_scc -- nvme/functions.sh@53-56 -- # local -n _ctrl_ns=nvme2_ns; namespace scan: /sys/class/nvme/nvme2/nvme2n1 exists -> ns_dev=nvme2n1
00:09:18.130 09:15:09 nvme_scc -- nvme/functions.sh@57,16 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 (runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1, parsed into nvme2n1[])
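After the controller fields, the loop logged at @54-58 walks the controller's namespaces in sysfs and runs the same parser per namespace. A sketch of that walk, reusing the nvme_get sketch above (the loop body is inferred from the logged line numbers, not copied from the script):

    declare -A _ctrl_ns=()                 # the trace aliases this via "local -n _ctrl_ns=nvme2_ns"
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do    # globs nvme2n1, nvme2n2, nvme2n3 (@54)
      [[ -e $ns ]] || continue             # existence check logged at @55
      ns_dev=${ns##*/}                     # e.g. nvme2n1 (@56)
      nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"   # @57
      _ctrl_ns[${ns##*n}]=$ns_dev          # key by namespace number (@58)
    done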
00:09:18.130 09:15:09 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2n1 id-ns fields:
00:09:18.130 09:15:09 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:18.130 09:15:09 nvme_scc --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:18.131 09:15:09 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:18.131 09:15:09 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.131 09:15:09 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:18.131 09:15:09 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:18.131 09:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:09:18.131 09:15:09 nvme_scc -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n2 exists -> nvme_get nvme2n2 id-ns /dev/nvme2n2
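Reading the nvme2n1 dump: nlbaf=7 means eight LBA formats (0-7), the low nibble of flbas selects the active one, and lbaf4 carries the "(in use)" tag; its lbads of 12 gives the logical block size. A quick check of that arithmetic:

    flbas=0x4
    echo "active format: lbaf$(( flbas & 0xf ))"   # lbaf4
    lbads=12                                       # from 'ms:0 lbads:12 rp:0 (in use)'
    echo "block size: $(( 1 << lbads )) bytes"     # 4096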
00:09:18.131 09:15:09 nvme_scc -- nvme/functions.sh@16-23 -- # nvme2n2 id-ns fields (identical to nvme2n1):
00:09:18.132 09:15:09 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:18.132 09:15:09 nvme_scc --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:18.132 09:15:09 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:18.133 09:15:09 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:18.133 09:15:09 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:18.133 09:15:09 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
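nvme2n2 parses to the same values as nvme2n1, so each namespace reports nsze=0x100000 blocks of 4096 bytes; the implied capacity:

    nsze=0x100000 bs=4096
    echo "$(( nsze * bs )) bytes = $(( (nsze * bs) >> 30 )) GiB per namespace"   # 4294967296 = 4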
09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 
09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.133 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:18.134 09:15:09 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:18.134 
09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:18.134 09:15:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:18.134 09:15:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:18.134 09:15:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:18.134 09:15:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:18.134 09:15:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:18.135 09:15:09 nvme_scc -- 
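In the lbaf0..lbaf7 entries just cached, lbads is the log2 of the LBA data size, so this namespace offers 512-byte (lbads:9) and 4096-byte (lbads:12) formats, with or without 8/16/64 metadata bytes (ms), and lbaf4 is the one marked "(in use)":

    # lbads is log2(LBA data size); decode the in-use format from the trace.
    lbads=12
    echo "in-use block size: $((1 << lbads)) bytes"    # 4096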
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.135 09:15:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.136 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 
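wctemp and cctemp, cached just above, are kelvin per the NVMe spec; QEMU's 343/373 are the familiar 70 C warning and 100 C critical thresholds:

    # NVMe temperature thresholds are reported in kelvin; convert both:
    for k in 343 373; do echo "$k K = $(( k - 273 )) C"; done    # 70 C, 100 C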
09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.398 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- 
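sqes and cqes pack two log2 sizes per byte: the low nibble is the required queue entry size, the high nibble the maximum. The 0x66/0x44 just parsed are the standard 64-byte submission and 16-byte completion entries:

    # Decode SQES/CQES nibbles into byte sizes.
    sqes=0x66 cqes=0x44
    echo "SQE $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"    # 64..64
    echo "CQE $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"    # 16..16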
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.399 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:18.400 09:15:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:18.400 
09:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:18.400 09:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:18.400 09:15:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:18.400 09:15:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.228 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.228 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.228 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.228 0000:00:12.0 (1b36 
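The ctrl_has_scc pass traced above reduces, per controller, to a single ONCS bit test: bit 8 is the Copy command. Every controller reports oncs=0x15d, which has the 0x100 bit set, so all four qualify and nvme1 is the first one echoed back:

    # The feature gate behind get_ctrls_with_feature scc, in isolation.
    oncs=0x15d    # = 0b1_0101_1101; bit 8 contributes the leading 0x100
    (( oncs & 1 << 8 )) && echo "Simple Copy supported (ONCS bit 8)"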
0010): nvme -> uio_pci_generic 00:09:19.228 09:15:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:19.228 09:15:10 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:19.229 09:15:10 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.229 09:15:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:19.229 ************************************ 00:09:19.229 START TEST nvme_simple_copy 00:09:19.229 ************************************ 00:09:19.229 09:15:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:19.489 Initializing NVMe Controllers 00:09:19.489 Attaching to 0000:00:10.0 00:09:19.489 Controller supports SCC. Attached to 0000:00:10.0 00:09:19.489 Namespace ID: 1 size: 6GB 00:09:19.489 Initialization complete. 00:09:19.489 00:09:19.489 Controller QEMU NVMe Ctrl (12340 ) 00:09:19.489 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:19.489 Namespace Block Size:4096 00:09:19.489 Writing LBAs 0 to 63 with Random Data 00:09:19.489 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:19.489 LBAs matching Written Data: 64 00:09:19.489 00:09:19.489 real 0m0.244s 00:09:19.489 user 0m0.082s 00:09:19.489 sys 0m0.060s 00:09:19.489 ************************************ 00:09:19.489 END TEST nvme_simple_copy 00:09:19.489 ************************************ 00:09:19.489 09:15:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.489 09:15:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:19.489 ************************************ 00:09:19.489 END TEST nvme_scc 00:09:19.489 ************************************ 00:09:19.489 00:09:19.489 real 0m7.417s 00:09:19.489 user 0m1.019s 00:09:19.489 sys 0m1.287s 00:09:19.489 09:15:11 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.489 09:15:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:19.489 09:15:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:19.489 09:15:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:19.489 09:15:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:19.489 09:15:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:19.489 09:15:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:19.489 09:15:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.490 09:15:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.490 09:15:11 -- common/autotest_common.sh@10 -- # set +x 00:09:19.490 ************************************ 00:09:19.490 START TEST nvme_fdp 00:09:19.490 ************************************ 00:09:19.490 09:15:11 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:19.490 * Looking for test storage... 
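run_test brackets each case with START/END banners and a time summary; the nvme_simple_copy case itself wrote LBAs 0 to 63 with random data, issued one Simple Copy to destination LBA 256, and read the copy back (64 LBAs matched). Roughly the same exercise by hand with nvme-cli, assuming the controller is bound back to the kernel nvme driver; the device name, the --sdlba/--slbs/--blocks flag spellings, and the 0-based --blocks count are assumptions worth verifying against your nvme-cli build:

    # Illustrative hand-driven re-run of the simple_copy case.
    dev=/dev/nvme1n1
    dd if=/dev/urandom of=/tmp/rand bs=4096 count=64
    dd if=/tmp/rand of="$dev" bs=4096 oflag=direct               # LBAs 0..63
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=63            # one source range
    dd if="$dev" of=/tmp/copy bs=4096 skip=256 count=64 iflag=direct
    cmp /tmp/rand /tmp/copy && echo "LBAs matching Written Data: 64"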
00:09:19.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:19.490 09:15:11 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.490 09:15:11 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.490 09:15:11 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.751 09:15:11 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.751 --rc genhtml_branch_coverage=1 00:09:19.751 --rc genhtml_function_coverage=1 00:09:19.751 --rc genhtml_legend=1 00:09:19.751 --rc geninfo_all_blocks=1 00:09:19.751 --rc geninfo_unexecuted_blocks=1 00:09:19.751 00:09:19.751 ' 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.751 --rc genhtml_branch_coverage=1 00:09:19.751 --rc genhtml_function_coverage=1 00:09:19.751 --rc genhtml_legend=1 00:09:19.751 --rc geninfo_all_blocks=1 00:09:19.751 --rc geninfo_unexecuted_blocks=1 00:09:19.751 00:09:19.751 ' 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
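The lt / cmp_versions walk traced here splits both version strings on the IFS=.-: separator set and compares component by component; 1 < 2 decides it at the first component, so the legacy lcov branch-coverage flags get exported below. A compressed sketch of the same logic (the real scripts/common.sh also validates each component with decimal() and handles the gt/eq operators):

    lt() {
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: enable legacy branch-coverage options"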
00:09:19.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.751 --rc genhtml_branch_coverage=1 00:09:19.751 --rc genhtml_function_coverage=1 00:09:19.751 --rc genhtml_legend=1 00:09:19.751 --rc geninfo_all_blocks=1 00:09:19.751 --rc geninfo_unexecuted_blocks=1 00:09:19.751 00:09:19.751 ' 00:09:19.751 09:15:11 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.751 --rc genhtml_branch_coverage=1 00:09:19.751 --rc genhtml_function_coverage=1 00:09:19.751 --rc genhtml_legend=1 00:09:19.751 --rc geninfo_all_blocks=1 00:09:19.751 --rc geninfo_unexecuted_blocks=1 00:09:19.751 00:09:19.751 ' 00:09:19.751 09:15:11 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:19.751 09:15:11 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:19.751 09:15:11 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.752 09:15:11 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.752 09:15:11 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.752 09:15:11 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.752 09:15:11 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.752 09:15:11 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.752 09:15:11 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.752 09:15:11 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.752 09:15:11 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:19.752 09:15:11 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
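Each time paths/export.sh is re-sourced it prepends the go/protoc/golangci directories again, which is why the PATH echoed above carries every tool dir four times over. A quick diagnostic for that duplication (not part of the repo, purely illustrative):

    # Count repeated PATH entries; the tool dirs show up with count 4 here.
    tr ':' '\n' <<< "$PATH" | sort | uniq -c | sort -rn | head -3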
00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:19.752 09:15:11 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:19.752 09:15:11 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.752 09:15:11 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.012 Waiting for block devices as requested 00:09:20.272 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.273 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.533 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.832 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:25.832 09:15:17 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:25.832 09:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:25.832 09:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:25.832 09:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.832 09:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
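functions.sh@10-14, traced at the start of this line, (re)declares the four global tables every scan fills: ctrls, nvmes (controller name to the name of its per-controller namespace map), bdfs (controller name to PCI address), and ordered_ctrls. How they hang together, with values lifted from the earlier nvme_scc scan:

    declare -A ctrls=( [nvme3]=nvme3 )
    declare -A nvmes=( [nvme3]=nvme3_ns )      # per-controller namespace map
    declare -A bdfs=(  [nvme3]=0000:00:13.0 )
    declare -a ordered_ctrls=( [3]=nvme3 )
    for c in "${!ctrls[@]}"; do
        echo "$c @ ${bdfs[$c]}, namespaces tracked in ${nvmes[$c]}"
    done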
00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:25.832 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:25.832 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:25.833 09:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
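The dump above reports oacs=0x12a for this QEMU controller. Under the NVMe base specification's OACS bit layout that corresponds to Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8); bit 5 is the relevant one here, since FDP data-placement handles ride on the directives mechanism this test exercises. A quick decode, as a sketch with bit names taken from the base spec:

# Decode the OACS bitmask seen above.
oacs=0x12a
bits=(security format fw-commit ns-mgmt self-test directives nvme-mi virt-mgmt dbbuf get-lba-status)
for i in "${!bits[@]}"; do
    (( oacs & (1 << i) )) && printf 'OACS bit %d set: %s\n' "$i" "${bits[$i]}"
done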
00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:25.833 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:25.833 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:25.834 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:25.834 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.834 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.835 
09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:25.835 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.835 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:25.836 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:25.836 09:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:25.836 09:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:25.836 09:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.836 09:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.836 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 
09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.837 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 
09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:25.838 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.838 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n
0x17a17a ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.839 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:25.840 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:25.840 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:25.841 09:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:25.841 09:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:25.841 09:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:25.841 09:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:25.841
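The xtrace above shows nvme/functions.sh finishing the nvme1/nvme1n1 tables, registering them in the ctrls/nvmes/bdfs/ordered_ctrls arrays, filtering the next device through pci_can_use, and moving on to nvme2. For reference, a minimal standalone sketch of the gather pattern being traced: walk /sys/class/nvme, run nvme-cli, and split each plain-text "reg : value" line on ':' into a bash associative array. The array name regs and the summary echo are illustrative and not part of the SPDK scripts, and trailing-space handling is simplified relative to the real helper:

  declare -A regs                       # per-controller register cache
  for ctrl in /sys/class/nvme/nvme*; do
      dev=/dev/${ctrl##*/}
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue     # skip banner/blank lines, as the [[ -n ... ]] steps above do
          regs[${reg//[[:space:]]/}]=${val# }
      done < <(nvme id-ctrl "$dev")
      echo "${ctrl##*/}: vid=${regs[vid]} sn=${regs[sn]} mdts=${regs[mdts]}"
  done

The real helper additionally walks "$ctrl/${ctrl##*/}n"* to run the same loop over nvme id-ns for each namespace, exactly as the nvme1n1 block above shows.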
09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:25.841 09:15:17 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:25.841 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
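The xtrace entries above show the heart of functions.sh's nvme_get loop: each line of `nvme id-ctrl` output is split on the first colon (IFS=:, read -r reg val) and, when the value is non-empty, eval'd into a bash associative array, e.g. nvme2[oacs]=0x12a, with the [[ -n ... ]] test skipping blank values. A minimal sketch of that pattern follows; parse_id_output and the sample heredoc are illustrative stand-ins under that reading, not the actual functions.sh code.

    #!/usr/bin/env bash
    # Sketch of the IFS=: / read / eval pattern visible in the trace.
    declare -A ctrl=()
    parse_id_output() {
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # drop padding around the key
            val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace
            [[ -n $val ]] || continue            # same guard as functions.sh@22
            eval "ctrl[$reg]=\"\$val\""          # e.g. ctrl[oacs]=0x12a
        done
    }
    parse_id_output <<'EOF'
    oacs  : 0x12a
    acl   : 3
    frmw  : 0x3
    EOF
    echo "${ctrl[oacs]}"                         # prints 0x12a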
00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.842 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.843 09:15:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
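Once captured this way, fields such as oncs=0x15d and vwc=0x7 above are plain hex strings, so individual capability bits can be tested directly with shell arithmetic. A hedged illustration (bit positions follow the NVMe spec's ONCS definition; this particular check is not a helper functions.sh is known to expose):

    # ONCS bit 2 = Dataset Management, bit 3 = Write Zeroes (NVMe spec);
    # both are set in the 0x15d value recorded above.
    oncs=0x15d
    (( oncs & (1 << 2) )) && echo "controller supports Dataset Management"
    (( oncs & (1 << 3) )) && echo "controller supports Write Zeroes"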
00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:25.843 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:25.844 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
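After the controller registers, the trace moves on to per-namespace data: functions.sh@54-57 glob /sys/class/nvme/nvme2/nvme2n* and run `nvme id-ns` on each device node, feeding the same parser. A sketch of that enumeration, using the loop expression verbatim from the trace (the echo is a stand-in for the real nvme_get call):

    # Mirrors functions.sh@54-57 as logged above.
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do        # .../nvme2n1, nvme2n2, nvme2n3
        [[ -e $ns ]] || continue               # skip if the glob matched nothing
        ns_dev=${ns##*/}                       # e.g. nvme2n1
        echo "nvme id-ns /dev/$ns_dev"         # stand-in for nvme_get "$ns_dev" id-ns ...
    done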
00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:25.844 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.845 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:25.846 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:25.846 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
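The long run of trace records above is one small loop in nvme/functions.sh executing once per field: each line that `nvme id-ns` (or `id-ctrl`) prints is split on the first colon into a register name and a value, which is then eval'd into a global associative array named after the device (here nvme2n3). A condensed, runnable sketch of that pattern follows; the function name and the canned fake_id_ns stub are mine, standing in for the real /dev/nvme2n3:

```bash
#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above (mirrors the
# nvme/functions.sh pattern in spirit; names here are hypothetical).
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"              # global assoc array, e.g. nvme2n3=()
    while IFS=: read -r reg val; do  # split "reg : value" on the colon
        reg=${reg//[[:space:]]/}     # keys like flbas, nlbaf, nguid
        [[ -n $reg ]] || continue    # skip blank/unparseable lines
        eval "${ref}[${reg}]=\"${val# }\""
    done < <("$@")                   # "$@" is the id-ns/id-ctrl command
}

# Canned stand-in for: /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
fake_id_ns() { printf 'nlbaf : 7\nflbas : 0x4\n'; }

nvme_get_sketch nvme2n3 fake_id_ns
echo "flbas=${nvme2n3[flbas]}"       # -> flbas=0x4
```

With flbas=0x4 the low nibble selects LBA format index 4, i.e. lbaf4 with `ms:0 lbads:12` (4096-byte blocks, no metadata), which is why the lbaf4 records above carry the `(in use)` marker.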
00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:25.847 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:25.848 
09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:25.848 09:15:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:25.848 09:15:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:25.848 09:15:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.848 09:15:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:25.848 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:25.848 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 
09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:25.849 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 
09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.850 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
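Two of the id-ctrl values captured a few records above, sqes=0x66 and cqes=0x44, are packed power-of-two pairs: per the Identify Controller layout, the low nibble is the required (minimum) entry-size exponent and the high nibble the maximum, so this QEMU controller reports the standard 64-byte submission and 16-byte completion queue entries. A quick hedged decode (helper name is mine):

```bash
# Decode the packed sqes/cqes nibbles from the id-ctrl output above.
decode_qes() {
    local name=$1 val=$2
    # low nibble: required (minimum) size; high nibble: maximum size
    printf '%s: min %d bytes, max %d bytes\n' \
        "$name" $((1 << (val & 0xf))) $((1 << ((val >> 4) & 0xf)))
}
decode_qes sqes $((0x66))   # -> sqes: min 64 bytes, max 64 bytes
decode_qes cqes $((0x44))   # -> cqes: min 16 bytes, max 16 bytes
```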
00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:25.851 09:15:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
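The ctrl_has_fdp walk that starts here (and continues through the remaining nvme0/nvme2 checks below) reduces to a single arithmetic test: CTRATT bit 19 is the Flexible Data Placement attribute, so nvme3's ctratt of 0x88010 passes while the 0x8000 controllers fail. A standalone sketch of that check; the helper name and the inline device list are mine:

```bash
# CTRATT bit 19 = FDP supported; mirrors (( ctratt & 1 << 19 )) traced here.
ctrl_has_fdp_sketch() {
    local ctratt=$1
    (( ctratt & 1 << 19 ))      # exit status 0 only when bit 19 is set
}

for c in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
    ctrl_has_fdp_sketch "$(( ${c#*:} ))" && echo "${c%%:*} supports FDP"
done
# -> nvme3 supports FDP
```

This is why only nvme3 (bdf 0000:00:13.0) is echoed back to nvme_fdp.sh as the controller to test.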
00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.851 09:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:25.852 09:15:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:25.852 09:15:17 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:25.852 09:15:17 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:26.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.683 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.683 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.683 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.683 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.683 09:15:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:26.683 09:15:18 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.683 09:15:18 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.683 09:15:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:26.683 ************************************ 00:09:26.683 START TEST nvme_flexible_data_placement 00:09:26.683 ************************************ 00:09:26.683 09:15:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:26.939 Initializing NVMe Controllers 00:09:26.939 Attaching to 0000:00:13.0 00:09:26.939 Controller supports FDP Attached to 0000:00:13.0 00:09:26.939 Namespace ID: 1 Endurance Group ID: 1 00:09:26.939 Initialization complete. 00:09:26.939 00:09:26.939 ================================== 00:09:26.939 == FDP tests for Namespace: #01 == 00:09:26.939 ================================== 00:09:26.939 00:09:26.939 Get Feature: FDP: 00:09:26.939 ================= 00:09:26.939 Enabled: Yes 00:09:26.939 FDP configuration Index: 0 00:09:26.939 00:09:26.939 FDP configurations log page 00:09:26.939 =========================== 00:09:26.939 Number of FDP configurations: 1 00:09:26.939 Version: 0 00:09:26.939 Size: 112 00:09:26.939 FDP Configuration Descriptor: 0 00:09:26.939 Descriptor Size: 96 00:09:26.939 Reclaim Group Identifier format: 2 00:09:26.939 FDP Volatile Write Cache: Not Present 00:09:26.939 FDP Configuration: Valid 00:09:26.939 Vendor Specific Size: 0 00:09:26.939 Number of Reclaim Groups: 2 00:09:26.939 Number of Recalim Unit Handles: 8 00:09:26.939 Max Placement Identifiers: 128 00:09:26.939 Number of Namespaces Suppprted: 256 00:09:26.939 Reclaim unit Nominal Size: 6000000 bytes 00:09:26.939 Estimated Reclaim Unit Time Limit: Not Reported 00:09:26.939 RUH Desc #000: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #001: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #002: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #003: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #004: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #005: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #006: RUH Type: Initially Isolated 00:09:26.939 RUH Desc #007: RUH Type: Initially Isolated 00:09:26.939 00:09:26.939 FDP reclaim unit handle usage log page 00:09:26.939 ====================================== 00:09:26.939 Number of Reclaim Unit Handles: 8 00:09:26.939 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:26.939 RUH Usage Desc #001: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #002: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #003: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #004: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #005: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #006: RUH Attributes: Unused 00:09:26.939 RUH Usage Desc #007: RUH Attributes: Unused 00:09:26.939 00:09:26.939 FDP statistics log page 00:09:26.939 ======================= 00:09:26.939 Host bytes with metadata written: 1114996736 00:09:26.939 Media bytes with metadata written: 1115119616 00:09:26.939 Media bytes erased: 0 00:09:26.939 00:09:26.939 FDP Reclaim unit handle status 00:09:26.939 ============================== 00:09:26.939 Number of RUHS descriptors: 2 00:09:26.939 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000058a8 00:09:26.939 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:26.939 00:09:26.939 FDP write on placement id: 0 success 00:09:26.939 00:09:26.939 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:09:26.939 00:09:26.939 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:26.939 00:09:26.939 Get Feature: FDP Events for Placement handle: #0 00:09:26.939 ======================== 00:09:26.939 Number of FDP Events: 6 00:09:26.939 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:26.939 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:26.939 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:26.939 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:26.939 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:26.939 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:26.939 00:09:26.939 FDP events log page 00:09:26.939 =================== 00:09:26.939 Number of FDP events: 1 00:09:26.939 FDP Event #0: 00:09:26.939 Event Type: RU Not Written to Capacity 00:09:26.939 Placement Identifier: Valid 00:09:26.939 NSID: Valid 00:09:26.939 Location: Valid 00:09:26.939 Placement Identifier: 0 00:09:26.939 Event Timestamp: 5 00:09:26.939 Namespace Identifier: 1 00:09:26.939 Reclaim Group Identifier: 0 00:09:26.939 Reclaim Unit Handle Identifier: 0 00:09:26.939 00:09:26.939 FDP test passed 00:09:26.939 00:09:26.939 real 0m0.227s 00:09:26.939 user 0m0.065s 00:09:26.939 sys 0m0.060s 00:09:26.939 09:15:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.939 09:15:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:26.939 ************************************ 00:09:26.939 END TEST nvme_flexible_data_placement 00:09:26.939 ************************************ 00:09:26.939 00:09:26.939 real 0m7.442s 00:09:26.939 user 0m0.975s 00:09:26.939 sys 0m1.317s 00:09:26.939 09:15:18 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.939 09:15:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:26.939 ************************************ 00:09:26.939 END TEST nvme_fdp 00:09:26.939 ************************************ 00:09:26.939 09:15:18 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:26.939 09:15:18 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:26.939 09:15:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.939 09:15:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.939 09:15:18 -- common/autotest_common.sh@10 -- # set +x 00:09:26.939 ************************************ 00:09:26.939 START TEST nvme_rpc 00:09:26.939 ************************************ 00:09:26.939 09:15:18 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:27.196 * Looking for test storage... 
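The nvme_rpc test that follows drives the target purely over JSON-RPC: it attaches a PCIe controller as bdev Nvme0, deliberately points bdev_nvme_apply_firmware at a file that does not exist, and verifies the call fails cleanly. Stripped of the shell tracing, the sequence is just the following sketch (rpc.py path as used in this job; the error JSON further below is the expected outcome, not a test failure):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # must fail with 'open file failed.'
  $rpc bdev_nvme_detach_controller Nvme0                              # clean up the controller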
00:09:27.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.197 09:15:18 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.197 --rc genhtml_branch_coverage=1 00:09:27.197 --rc genhtml_function_coverage=1 00:09:27.197 --rc genhtml_legend=1 00:09:27.197 --rc geninfo_all_blocks=1 00:09:27.197 --rc geninfo_unexecuted_blocks=1 00:09:27.197 00:09:27.197 ' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.197 --rc genhtml_branch_coverage=1 00:09:27.197 --rc genhtml_function_coverage=1 00:09:27.197 --rc genhtml_legend=1 00:09:27.197 --rc geninfo_all_blocks=1 00:09:27.197 --rc geninfo_unexecuted_blocks=1 00:09:27.197 00:09:27.197 ' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.197 --rc genhtml_branch_coverage=1 00:09:27.197 --rc genhtml_function_coverage=1 00:09:27.197 --rc genhtml_legend=1 00:09:27.197 --rc geninfo_all_blocks=1 00:09:27.197 --rc geninfo_unexecuted_blocks=1 00:09:27.197 00:09:27.197 ' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:27.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.197 --rc genhtml_branch_coverage=1 00:09:27.197 --rc genhtml_function_coverage=1 00:09:27.197 --rc genhtml_legend=1 00:09:27.197 --rc geninfo_all_blocks=1 00:09:27.197 --rc geninfo_unexecuted_blocks=1 00:09:27.197 00:09:27.197 ' 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66081 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:27.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.197 09:15:18 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66081 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66081 ']' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.197 09:15:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.197 [2024-10-08 09:15:18.837100] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:09:27.197 [2024-10-08 09:15:18.837226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66081 ] 00:09:27.454 [2024-10-08 09:15:18.985323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.712 [2024-10-08 09:15:19.165279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.712 [2024-10-08 09:15:19.165363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.277 09:15:19 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.277 09:15:19 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:28.277 09:15:19 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:28.535 Nvme0n1 00:09:28.535 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:28.535 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:28.535 request: 00:09:28.535 { 00:09:28.535 "bdev_name": "Nvme0n1", 00:09:28.535 "filename": "non_existing_file", 00:09:28.535 "method": "bdev_nvme_apply_firmware", 00:09:28.535 "req_id": 1 00:09:28.535 } 00:09:28.535 Got JSON-RPC error response 00:09:28.535 response: 00:09:28.535 { 00:09:28.535 "code": -32603, 00:09:28.535 "message": "open file failed." 00:09:28.535 } 00:09:28.792 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:28.792 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:28.792 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:28.792 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:28.792 09:15:20 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66081 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66081 ']' 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66081 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66081 00:09:28.792 killing process with pid 66081 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66081' 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66081 00:09:28.792 09:15:20 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66081 00:09:30.701 00:09:30.701 real 0m3.409s 00:09:30.701 user 0m6.338s 00:09:30.701 sys 0m0.508s 00:09:30.701 09:15:21 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.701 09:15:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.701 ************************************ 00:09:30.701 END TEST nvme_rpc 00:09:30.701 ************************************ 00:09:30.701 09:15:22 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:30.701 09:15:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:30.701 09:15:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.701 09:15:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.701 ************************************ 00:09:30.701 START TEST nvme_rpc_timeouts 00:09:30.701 ************************************ 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:30.701 * Looking for test storage... 00:09:30.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.701 09:15:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.701 --rc genhtml_branch_coverage=1 00:09:30.701 --rc genhtml_function_coverage=1 00:09:30.701 --rc genhtml_legend=1 00:09:30.701 --rc geninfo_all_blocks=1 00:09:30.701 --rc geninfo_unexecuted_blocks=1 00:09:30.701 00:09:30.701 ' 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.701 --rc genhtml_branch_coverage=1 00:09:30.701 --rc genhtml_function_coverage=1 00:09:30.701 --rc genhtml_legend=1 00:09:30.701 --rc geninfo_all_blocks=1 00:09:30.701 --rc geninfo_unexecuted_blocks=1 00:09:30.701 00:09:30.701 ' 00:09:30.701 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.701 --rc genhtml_branch_coverage=1 00:09:30.701 --rc genhtml_function_coverage=1 00:09:30.701 --rc genhtml_legend=1 00:09:30.701 --rc geninfo_all_blocks=1 00:09:30.702 --rc geninfo_unexecuted_blocks=1 00:09:30.702 00:09:30.702 ' 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.702 --rc genhtml_branch_coverage=1 00:09:30.702 --rc genhtml_function_coverage=1 00:09:30.702 --rc genhtml_legend=1 00:09:30.702 --rc geninfo_all_blocks=1 00:09:30.702 --rc geninfo_unexecuted_blocks=1 00:09:30.702 00:09:30.702 ' 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66147 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66147 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66179 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
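The waitforlisten helper used just below only blocks until the freshly launched target answers on its RPC socket. A rough shell equivalent of that startup handshake (core mask and socket path as traced in this job; the polling loop is an illustrative stand-in for the helper, not its actual implementation):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
  spdk_tgt_pid=$!
  # poll the UNIX domain socket until the target starts servicing RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done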
00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66179 00:09:30.702 09:15:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66179 ']' 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.702 09:15:22 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:30.702 [2024-10-08 09:15:22.223000] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:09:30.702 [2024-10-08 09:15:22.223123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66179 ] 00:09:30.702 [2024-10-08 09:15:22.368862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:30.962 [2024-10-08 09:15:22.547814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.962 [2024-10-08 09:15:22.547891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.601 Checking default timeout settings: 00:09:31.601 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.601 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:31.601 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:31.601 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:31.864 Making settings changes with rpc: 00:09:31.864 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:31.864 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:32.125 Check default vs. modified settings: 00:09:32.126 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:32.126 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:32.387 Setting action_on_timeout is changed as expected. 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:32.387 Setting timeout_us is changed as expected. 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:32.387 Setting timeout_admin_us is changed as expected. 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66147 /tmp/settings_modified_66147 00:09:32.387 09:15:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66179 00:09:32.387 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66179 ']' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66179 00:09:32.387 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:32.387 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.387 09:15:23 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66179 00:09:32.387 09:15:24 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.387 killing process with pid 66179 00:09:32.387 09:15:24 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.387 09:15:24 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66179' 00:09:32.387 09:15:24 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66179 00:09:32.387 09:15:24 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66179 00:09:33.773 RPC TIMEOUT SETTING TEST PASSED. 00:09:33.773 09:15:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
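The settings_to_check loop above is the whole assertion: a save_config snapshot is taken before and after bdev_nvme_set_options, and each field is compared between the two dumps. The round trip reduces to this sketch (tmp file names are illustrative; the RPC arguments are exactly the ones traced above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default                  # defaults: action_on_timeout=none, both timeouts 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified
  diff /tmp/settings_default /tmp/settings_modified         # the test greps field by field instead of diffing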
00:09:33.773 00:09:33.773 real 0m3.333s 00:09:33.773 user 0m6.299s 00:09:33.773 sys 0m0.488s 00:09:33.773 09:15:25 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.773 ************************************ 00:09:33.773 END TEST nvme_rpc_timeouts 00:09:33.773 ************************************ 00:09:33.773 09:15:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:33.773 09:15:25 -- spdk/autotest.sh@239 -- # uname -s 00:09:33.773 09:15:25 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:33.773 09:15:25 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:33.773 09:15:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.773 09:15:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.773 09:15:25 -- common/autotest_common.sh@10 -- # set +x 00:09:33.773 ************************************ 00:09:33.773 START TEST sw_hotplug 00:09:33.773 ************************************ 00:09:33.773 09:15:25 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:34.036 * Looking for test storage... 00:09:34.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
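The nvme_in_userspace scan traced below walks every PCIe function with class code 01 (mass storage), subclass 08 (non-volatile memory) and programming interface 02 (NVM Express), then drops any function a kernel driver must keep. Minus the shell tracing, the enumeration is this one pipeline, reassembled from the scripts/common.sh trace below:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'   # BDFs of all NVMe functions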
ver1_l : ver2_l) )) 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.036 09:15:25 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.036 --rc genhtml_branch_coverage=1 00:09:34.036 --rc genhtml_function_coverage=1 00:09:34.036 --rc genhtml_legend=1 00:09:34.036 --rc geninfo_all_blocks=1 00:09:34.036 --rc geninfo_unexecuted_blocks=1 00:09:34.036 00:09:34.036 ' 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.036 --rc genhtml_branch_coverage=1 00:09:34.036 --rc genhtml_function_coverage=1 00:09:34.036 --rc genhtml_legend=1 00:09:34.036 --rc geninfo_all_blocks=1 00:09:34.036 --rc geninfo_unexecuted_blocks=1 00:09:34.036 00:09:34.036 ' 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.036 --rc genhtml_branch_coverage=1 00:09:34.036 --rc genhtml_function_coverage=1 00:09:34.036 --rc genhtml_legend=1 00:09:34.036 --rc geninfo_all_blocks=1 00:09:34.036 --rc geninfo_unexecuted_blocks=1 00:09:34.036 00:09:34.036 ' 00:09:34.036 09:15:25 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.036 --rc genhtml_branch_coverage=1 00:09:34.036 --rc genhtml_function_coverage=1 00:09:34.036 --rc genhtml_legend=1 00:09:34.036 --rc geninfo_all_blocks=1 00:09:34.036 --rc geninfo_unexecuted_blocks=1 00:09:34.036 00:09:34.036 ' 00:09:34.036 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.296 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.296 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.296 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.296 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:34.296 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:34.296 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:34.296 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:34.557 09:15:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:34.557 09:15:25 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.557 09:15:26 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:34.557 09:15:26 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:34.558 09:15:26 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:34.558 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:34.558 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:34.558 09:15:26 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:34.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.816 Waiting for block devices as requested 00:09:35.077 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.077 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.077 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.361 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.362 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:40.362 09:15:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.621 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:40.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.621 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:40.879 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:41.140 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.140 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67031 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:41.401 09:15:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:41.401 09:15:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:41.662 Initializing NVMe Controllers 00:09:41.662 Attaching to 0000:00:10.0 00:09:41.662 Attaching to 0000:00:11.0 00:09:41.662 Attached to 0000:00:10.0 00:09:41.662 Attached to 0000:00:11.0 00:09:41.662 Initialization complete. Starting I/O... 
00:09:41.662 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:41.662 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:41.662 00:09:42.604 QEMU NVMe Ctrl (12340 ): 2147 I/Os completed (+2147) 00:09:42.604 QEMU NVMe Ctrl (12341 ): 2153 I/Os completed (+2153) 00:09:42.604 00:09:43.547 QEMU NVMe Ctrl (12340 ): 5426 I/Os completed (+3279) 00:09:43.547 QEMU NVMe Ctrl (12341 ): 5404 I/Os completed (+3251) 00:09:43.547 00:09:44.490 QEMU NVMe Ctrl (12340 ): 9188 I/Os completed (+3762) 00:09:44.490 QEMU NVMe Ctrl (12341 ): 9139 I/Os completed (+3735) 00:09:44.490 00:09:45.895 QEMU NVMe Ctrl (12340 ): 12870 I/Os completed (+3682) 00:09:45.895 QEMU NVMe Ctrl (12341 ): 12830 I/Os completed (+3691) 00:09:45.895 00:09:46.509 QEMU NVMe Ctrl (12340 ): 16722 I/Os completed (+3852) 00:09:46.509 QEMU NVMe Ctrl (12341 ): 16682 I/Os completed (+3852) 00:09:46.509 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.453 [2024-10-08 09:15:38.919278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:47.453 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.453 [2024-10-08 09:15:38.920319] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.920449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.920468] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.920482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.453 [2024-10-08 09:15:38.922126] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.922219] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.922245] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.922296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.453 [2024-10-08 09:15:38.942006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
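The echo 1 writes traced above are the surprise-removal trigger: each selected function is yanked out through sysfs, the hotplug example is given hotplug_wait seconds to observe the failed controller, and the device is then rescanned and rebound to the userspace driver. In generic sysfs terms, one event looks roughly like this (a sketch of the standard kernel interface, not quoted verbatim from sw_hotplug.sh):

  bdf=0000:00:10.0
  echo 1 > /sys/bus/pci/devices/$bdf/remove     # simulate surprise hot-remove
  sleep 6                                       # hotplug_wait for this run
  echo 1 > /sys/bus/pci/rescan                  # make the function reappear
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
  echo $bdf > /sys/bus/pci/drivers_probe        # rebind to the userspace driver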
00:09:47.453 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:47.453 [2024-10-08 09:15:38.942882] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.942914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.942941] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.942954] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:47.453 [2024-10-08 09:15:38.944273] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.944304] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.944317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 [2024-10-08 09:15:38.944329] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.453 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:47.453 EAL: Scan for (pci) bus failed. 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:47.453 09:15:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.453 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:47.453 Attaching to 0000:00:10.0 00:09:47.453 Attached to 0000:00:10.0 00:09:47.453 QEMU NVMe Ctrl (12340 ): 12 I/Os completed (+12) 00:09:47.453 00:09:47.716 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:47.716 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.716 09:15:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:47.716 Attaching to 0000:00:11.0 00:09:47.716 Attached to 0000:00:11.0 00:09:48.657 QEMU NVMe Ctrl (12340 ): 2951 I/Os completed (+2939) 00:09:48.657 QEMU NVMe Ctrl (12341 ): 2708 I/Os completed (+2708) 00:09:48.657 00:09:49.598 QEMU NVMe Ctrl (12340 ): 6646 I/Os completed (+3695) 00:09:49.598 QEMU NVMe Ctrl (12341 ): 6418 I/Os completed (+3710) 00:09:49.598 00:09:50.538 QEMU NVMe Ctrl (12340 ): 10414 I/Os completed (+3768) 00:09:50.538 QEMU NVMe Ctrl (12341 ): 10163 I/Os completed (+3745) 00:09:50.538 00:09:51.477 QEMU NVMe Ctrl (12340 ): 14155 I/Os completed (+3741) 00:09:51.477 QEMU NVMe Ctrl (12341 ): 13901 I/Os completed (+3738) 00:09:51.477 00:09:52.856 QEMU NVMe Ctrl (12340 ): 17885 I/Os completed (+3730) 00:09:52.856 QEMU NVMe Ctrl (12341 ): 17629 I/Os completed (+3728) 00:09:52.856 00:09:53.813 QEMU NVMe Ctrl (12340 ): 21602 I/Os completed (+3717) 00:09:53.813 QEMU NVMe Ctrl (12341 ): 21349 I/Os completed (+3720) 00:09:53.813 00:09:54.760 QEMU NVMe Ctrl (12340 ): 25413 I/Os completed (+3811) 
00:09:54.760 QEMU NVMe Ctrl (12341 ): 25178 I/Os completed (+3829) 00:09:54.760 00:09:55.703 QEMU NVMe Ctrl (12340 ): 29125 I/Os completed (+3712) 00:09:55.703 QEMU NVMe Ctrl (12341 ): 28906 I/Os completed (+3728) 00:09:55.703 00:09:56.646 QEMU NVMe Ctrl (12340 ): 32664 I/Os completed (+3539) 00:09:56.646 QEMU NVMe Ctrl (12341 ): 32534 I/Os completed (+3628) 00:09:56.646 00:09:57.585 QEMU NVMe Ctrl (12340 ): 35707 I/Os completed (+3043) 00:09:57.585 QEMU NVMe Ctrl (12341 ): 35574 I/Os completed (+3040) 00:09:57.585 00:09:58.523 QEMU NVMe Ctrl (12340 ): 38843 I/Os completed (+3136) 00:09:58.523 QEMU NVMe Ctrl (12341 ): 38713 I/Os completed (+3139) 00:09:58.523 00:09:59.465 QEMU NVMe Ctrl (12340 ): 42499 I/Os completed (+3656) 00:09:59.465 QEMU NVMe Ctrl (12341 ): 42382 I/Os completed (+3669) 00:09:59.465 00:09:59.726 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:59.726 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.726 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.726 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.727 [2024-10-08 09:15:51.202917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:59.727 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.727 [2024-10-08 09:15:51.203858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.203897] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.203912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.203927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.727 [2024-10-08 09:15:51.205613] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.205650] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.205661] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.205672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.727 [2024-10-08 09:15:51.224901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:59.727 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.727 [2024-10-08 09:15:51.225778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.225810] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.225828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.225840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.727 [2024-10-08 09:15:51.227180] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.227208] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.227220] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 [2024-10-08 09:15:51.227232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.727 EAL: Cannot open sysfs resource 00:09:59.727 EAL: pci_scan_one(): cannot parse resource 00:09:59.727 EAL: Scan for (pci) bus failed. 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.727 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.727 Attaching to 0000:00:10.0 00:09:59.727 Attached to 0000:00:10.0 00:09:59.987 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.987 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.987 09:15:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.987 Attaching to 0000:00:11.0 00:09:59.987 Attached to 0000:00:11.0 00:10:00.558 QEMU NVMe Ctrl (12340 ): 2690 I/Os completed (+2690) 00:10:00.558 QEMU NVMe Ctrl (12341 ): 2369 I/Os completed (+2369) 00:10:00.558 00:10:01.500 QEMU NVMe Ctrl (12340 ): 6339 I/Os completed (+3649) 00:10:01.501 QEMU NVMe Ctrl (12341 ): 6021 I/Os completed (+3652) 00:10:01.501 00:10:02.887 QEMU NVMe Ctrl (12340 ): 10137 I/Os completed (+3798) 00:10:02.887 QEMU NVMe Ctrl (12341 ): 9817 I/Os completed (+3796) 00:10:02.887 00:10:03.458 QEMU NVMe Ctrl (12340 ): 13816 I/Os completed (+3679) 00:10:03.458 QEMU NVMe Ctrl (12341 ): 13505 I/Os completed (+3688) 00:10:03.458 00:10:04.844 QEMU NVMe Ctrl (12340 ): 17681 I/Os completed (+3865) 00:10:04.844 QEMU NVMe Ctrl (12341 ): 17365 I/Os completed (+3860) 00:10:04.844 00:10:05.789 QEMU NVMe Ctrl (12340 ): 21404 I/Os completed (+3723) 00:10:05.789 QEMU NVMe Ctrl (12341 ): 21081 I/Os completed (+3716) 00:10:05.789 00:10:06.741 QEMU NVMe Ctrl (12340 ): 25133 I/Os completed (+3729) 00:10:06.741 QEMU NVMe Ctrl (12341 ): 24829 I/Os completed (+3748) 00:10:06.741 
00:10:07.686 QEMU NVMe Ctrl (12340 ): 28772 I/Os completed (+3639) 00:10:07.686 QEMU NVMe Ctrl (12341 ): 28457 I/Os completed (+3628) 00:10:07.686 00:10:08.631 QEMU NVMe Ctrl (12340 ): 32405 I/Os completed (+3633) 00:10:08.631 QEMU NVMe Ctrl (12341 ): 32108 I/Os completed (+3651) 00:10:08.631 00:10:09.574 QEMU NVMe Ctrl (12340 ): 36086 I/Os completed (+3681) 00:10:09.574 QEMU NVMe Ctrl (12341 ): 35800 I/Os completed (+3692) 00:10:09.574 00:10:10.517 QEMU NVMe Ctrl (12340 ): 39788 I/Os completed (+3702) 00:10:10.517 QEMU NVMe Ctrl (12341 ): 39510 I/Os completed (+3710) 00:10:10.517 00:10:11.460 QEMU NVMe Ctrl (12340 ): 43468 I/Os completed (+3680) 00:10:11.460 QEMU NVMe Ctrl (12341 ): 43175 I/Os completed (+3665) 00:10:11.460 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.037 [2024-10-08 09:16:03.480516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:12.037 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.037 [2024-10-08 09:16:03.481483] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.481517] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.481531] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.481545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.037 [2024-10-08 09:16:03.483053] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.483092] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.483104] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.483116] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.037 [2024-10-08 09:16:03.503409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:12.037 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.037 [2024-10-08 09:16:03.504273] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.504309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.504325] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.504338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.037 [2024-10-08 09:16:03.505700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.505730] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.505744] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 [2024-10-08 09:16:03.505754] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.037 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.037 EAL: Scan for (pci) bus failed. 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.037 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.037 Attaching to 0000:00:10.0 00:10:12.037 Attached to 0000:00:10.0 00:10:12.298 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.298 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.298 09:16:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.298 Attaching to 0000:00:11.0 00:10:12.298 Attached to 0000:00:11.0 00:10:12.298 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.298 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.298 [2024-10-08 09:16:03.751398] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:24.553 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.553 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.553 09:16:15 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.83 00:10:24.553 09:16:15 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.83 00:10:24.553 09:16:15 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:24.553 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:24.553 09:16:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:24.553 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 09:16:15 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67031 00:10:31.145 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67031) - No such process 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67031 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67579 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67579 00:10:31.145 09:16:21 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67579 ']' 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.145 09:16:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.145 [2024-10-08 09:16:21.829082] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
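At this point the previous helper (pid 67031) is confirmed gone with kill -0, a fresh spdk_tgt is launched as pid 67579, and the harness blocks until the target listens on /var/tmp/spdk.sock, allowing up to 100 retries per the max_retries trace. A minimal re-creation of that wait follows; the socket probe via `test -S` is an assumption, since the real helper may poke the RPC client instead.

```bash
# Sketch of the waitforlisten idiom from the trace: poll until the target
# owns the RPC socket; bail out if it dies first or retries run out.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1  # process exited during startup
    [[ -S $rpc_addr ]] && return 0          # socket present: target is up
    sleep 0.1
  done
  return 1
}
```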
00:10:31.145 [2024-10-08 09:16:21.829206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67579 ] 00:10:31.145 [2024-10-08 09:16:21.971084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.145 [2024-10-08 09:16:22.144489] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:31.145 09:16:22 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:31.145 09:16:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.739 09:16:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.739 09:16:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 09:16:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:37.739 09:16:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.739 [2024-10-08 09:16:28.823082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:37.739 [2024-10-08 09:16:28.824275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:28.824311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:28.824324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:28.824341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:28.824348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:28.824357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:28.824363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:28.824371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:28.824378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:28.824398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:28.824404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:28.824412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:29.223080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
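With hotplug now exercised against the SPDK target rather than bare controllers, device presence is judged through the bdev layer: the bdev_bdfs calls in the trace pipe bdev_get_bdevs through jq to recover the backing PCI addresses. The jq filter and sort -u below are verbatim from the xtrace; the rpc.py invocation is an assumed expansion of the rpc_cmd wrapper.

```bash
# bdev_bdfs as reconstructed from the trace: list every bdev over RPC and
# reduce the answer to the sorted, unique set of backing NVMe PCI addresses.
bdev_bdfs() {
  "$rootdir/scripts/rpc.py" bdev_get_bdevs |      # assumed rpc_cmd expansion
    jq -r '.[].driver_specific.nvme[].pci_address' |
    sort -u
}
```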
00:10:37.739 [2024-10-08 09:16:29.224281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:29.224313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:29.224324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:29.224339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:29.224347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:29.224354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:29.224363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:29.224370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:29.224377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 [2024-10-08 09:16:29.224384] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.739 [2024-10-08 09:16:29.224407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.739 [2024-10-08 09:16:29.224414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.739 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.740 09:16:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.740 09:16:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.740 09:16:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.740 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.999 09:16:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:16:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.260 09:16:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.260 [2024-10-08 09:16:41.723296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:50.260 [2024-10-08 09:16:41.724498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.260 [2024-10-08 09:16:41.724531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.260 [2024-10-08 09:16:41.724541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.260 [2024-10-08 09:16:41.724558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.260 [2024-10-08 09:16:41.724566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.260 [2024-10-08 09:16:41.724574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.260 [2024-10-08 09:16:41.724581] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.260 [2024-10-08 09:16:41.724589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.260 [2024-10-08 09:16:41.724595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.260 [2024-10-08 09:16:41.724603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.260 [2024-10-08 09:16:41.724610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.260 [2024-10-08 09:16:41.724617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.521 [2024-10-08 09:16:42.123289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
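The (( 2 > 0 )) / sleep 0.5 / "Still waiting for %s to be gone" sequence is a drain loop: after the surprise removal the script keeps listing attached BDFs until none of the removed devices are reported any more (the count visibly steps from 2 to 1 to 0 across cycles). A sketch of that loop, matching the printf format in the trace, with the intersection test being a simplification:

```bash
# Drain loop sketch: poll bdev_bdfs until no removed BDF is listed.
# One "Still waiting" line prints per leftover device, as in the log.
nvmes=(0000:00:10.0 0000:00:11.0)
while true; do
  bdfs=($(bdev_bdfs))
  remaining=()
  for dev in "${nvmes[@]}"; do
    [[ " ${bdfs[*]} " == *" $dev "* ]] && remaining+=("$dev")
  done
  (( ${#remaining[@]} > 0 )) || break
  printf 'Still waiting for %s to be gone\n' "${remaining[@]}"
  sleep 0.5
done
```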
00:10:50.521 [2024-10-08 09:16:42.124474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.522 [2024-10-08 09:16:42.124504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.522 [2024-10-08 09:16:42.124516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.522 [2024-10-08 09:16:42.124529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.522 [2024-10-08 09:16:42.124537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.522 [2024-10-08 09:16:42.124544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.522 [2024-10-08 09:16:42.124553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.522 [2024-10-08 09:16:42.124560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.522 [2024-10-08 09:16:42.124567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.522 [2024-10-08 09:16:42.124574] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.522 [2024-10-08 09:16:42.124582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.522 [2024-10-08 09:16:42.124588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.522 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.522 09:16:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.522 09:16:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.522 09:16:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.783 09:16:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.017 [2024-10-08 09:16:54.523534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:03.017 [2024-10-08 09:16:54.525025] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.017 [2024-10-08 09:16:54.525061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.017 [2024-10-08 09:16:54.525072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.017 [2024-10-08 09:16:54.525089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.017 [2024-10-08 09:16:54.525097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.017 [2024-10-08 09:16:54.525107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.017 [2024-10-08 09:16:54.525114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.017 [2024-10-08 09:16:54.525122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.017 [2024-10-08 09:16:54.525128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.017 [2024-10-08 09:16:54.525137] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.017 [2024-10-08 09:16:54.525143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.017 [2024-10-08 09:16:54.525151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.017 09:16:54 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.017 09:16:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:03.017 09:16:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.589 [2024-10-08 09:16:55.023538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:03.589 [2024-10-08 09:16:55.024763] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.589 [2024-10-08 09:16:55.024796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.589 [2024-10-08 09:16:55.024808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.589 [2024-10-08 09:16:55.024824] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.589 [2024-10-08 09:16:55.024833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.589 [2024-10-08 09:16:55.024840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.589 [2024-10-08 09:16:55.024848] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.589 [2024-10-08 09:16:55.024855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.589 [2024-10-08 09:16:55.024864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.589 [2024-10-08 09:16:55.024871] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.589 [2024-10-08 09:16:55.024879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.589 [2024-10-08 09:16:55.024885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:03.589 09:16:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.589 09:16:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.589 09:16:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.589 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.851 09:16:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.67 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.67 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:11:16.127 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.127 09:17:07 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:16.127 09:17:07 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:16.127 09:17:07 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.749 09:17:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.749 09:17:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.749 09:17:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:22.749 09:17:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:22.749 [2024-10-08 09:17:13.521183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
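The `local time=0 TIMEFORMAT=%2R` / `exec` / `echo 44.67` sequence is bash's `time` keyword being harvested: %2R makes the report print only elapsed wall-clock seconds to two decimals, which the caller stores as helper_time. The fd plumbing below is one common way to capture that report and is an assumption; the `exec` in the trace suggests the real helper wires descriptors somewhat differently.

```bash
# Sketch of the timing wrapper: run a command, capture only the %2R
# elapsed-seconds report from bash's `time` keyword.
timing_cmd() {
  local time=0 TIMEFORMAT=%2R
  time=$( { time "$@" 1>&3 2>&4; } 2>&1 )  # keyword report -> captured
  echo "$time"
} 3>&1 4>&2

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
  "$helper_time" 2
```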
00:11:22.749 [2024-10-08 09:17:13.522100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:13.522136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:13.522146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 [2024-10-08 09:17:13.522163] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:13.522171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:13.522180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 [2024-10-08 09:17:13.522188] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:13.522196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:13.522203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 [2024-10-08 09:17:13.522211] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:13.522217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:13.522228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.749 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.749 09:17:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.749 09:17:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.749 [2024-10-08 09:17:14.021183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
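Taken together, the @36-@66 traces outline remove_attach_helper itself: settle, then for each hotplug event surprise-remove every controller, wait for their bdevs to drain, rescan, rebind, and sleep twice the wait interval. The outline below is reconstructed from those line numbers and reuses the helpers sketched earlier; remove_device, rescan_bus, bind_driver, and the drain loop are all assumptions layered on the xtrace, not the script's source.

```bash
remove_attach_helper() {                # args per trace: 3 6 true
  local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev
  sleep "$hotplug_wait"                 # @36: let the initial attach settle
  while (( hotplug_events-- )); do      # @38
    for dev in "${nvmes[@]}"; do        # @39/@40: surprise-remove each one
      remove_device "$dev"
    done
    # @50/@51: drain until no removed BDF is reported (sketched above)
    rescan_bus                          # @56: echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do        # @58-@62: rebind to uio_pci_generic
      bind_driver "$dev" uio_pci_generic
    done
    sleep $(( hotplug_wait * 2 ))       # @66: the "sleep 12" in the trace
  done
}
```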
00:11:22.749 [2024-10-08 09:17:14.022052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:14.022083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:14.022094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 [2024-10-08 09:17:14.022108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:14.022116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.749 [2024-10-08 09:17:14.022122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.749 [2024-10-08 09:17:14.022131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.749 [2024-10-08 09:17:14.022138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.750 [2024-10-08 09:17:14.022145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.750 [2024-10-08 09:17:14.022153] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.750 [2024-10-08 09:17:14.022161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.750 [2024-10-08 09:17:14.022167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.750 09:17:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.750 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:22.750 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.009 09:17:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.009 09:17:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.009 09:17:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.009 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.270 09:17:14 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.270 09:17:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.496 09:17:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:35.496 09:17:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.496 [2024-10-08 09:17:26.921431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
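After each reattach, sw_hotplug.sh@71 compares the freshly enumerated BDFs against the expected pair; the backslash-heavy right-hand side in the trace is simply xtrace quoting the pattern operand of [[ == ]]. In source form the check is a plain string comparison along these lines; the error branch is an assumption, since the trace only ever shows the match succeeding.

```bash
# Post-rescan verification sketch: both controllers must be back.
bdfs=($(bdev_bdfs))
if [[ "${bdfs[*]}" == "${nvmes[*]}" ]]; then
  : # matched, as in the trace: continue with the next hotplug event
else
  echo "unexpected controller set after rescan: ${bdfs[*]}" >&2
  exit 1
fi
```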
00:11:35.496 [2024-10-08 09:17:26.922380] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.496 [2024-10-08 09:17:26.922425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.496 [2024-10-08 09:17:26.922436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.496 [2024-10-08 09:17:26.922453] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.496 [2024-10-08 09:17:26.922460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.496 [2024-10-08 09:17:26.922468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.496 [2024-10-08 09:17:26.922476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.496 [2024-10-08 09:17:26.922483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.496 [2024-10-08 09:17:26.922490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.496 [2024-10-08 09:17:26.922498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.496 [2024-10-08 09:17:26.922505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.496 [2024-10-08 09:17:26.922513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.757 [2024-10-08 09:17:27.321444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:35.757 [2024-10-08 09:17:27.322441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.757 [2024-10-08 09:17:27.322473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.757 [2024-10-08 09:17:27.322485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.757 [2024-10-08 09:17:27.322499] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.757 [2024-10-08 09:17:27.322509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.757 [2024-10-08 09:17:27.322516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.757 [2024-10-08 09:17:27.322524] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.757 [2024-10-08 09:17:27.322531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.757 [2024-10-08 09:17:27.322539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.757 [2024-10-08 09:17:27.322546] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.757 [2024-10-08 09:17:27.322554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.757 [2024-10-08 09:17:27.322561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.757 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.757 09:17:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.757 09:17:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.758 09:17:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.019 09:17:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.249 09:17:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.249 09:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.249 09:17:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.249 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.249 [2024-10-08 09:17:39.721658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:48.249 [2024-10-08 09:17:39.722811] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.249 [2024-10-08 09:17:39.722848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.249 [2024-10-08 09:17:39.722858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.249 [2024-10-08 09:17:39.722877] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.249 [2024-10-08 09:17:39.722884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.249 [2024-10-08 09:17:39.722892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.250 [2024-10-08 09:17:39.722900] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.250 [2024-10-08 09:17:39.722909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.250 [2024-10-08 09:17:39.722916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.250 [2024-10-08 09:17:39.722923] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.250 [2024-10-08 09:17:39.722930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.250 [2024-10-08 09:17:39.722938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.250 09:17:39 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.250 09:17:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.250 09:17:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.250 09:17:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:48.250 09:17:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.510 [2024-10-08 09:17:40.121668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:48.510 [2024-10-08 09:17:40.122676] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.510 [2024-10-08 09:17:40.122708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.510 [2024-10-08 09:17:40.122720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.510 [2024-10-08 09:17:40.122735] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.510 [2024-10-08 09:17:40.122744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.510 [2024-10-08 09:17:40.122751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.510 [2024-10-08 09:17:40.122759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.510 [2024-10-08 09:17:40.122766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.510 [2024-10-08 09:17:40.122774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.510 [2024-10-08 09:17:40.122781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.510 [2024-10-08 09:17:40.122791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.510 [2024-10-08 09:17:40.122797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:48.771 09:17:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.771 09:17:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.771 09:17:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.771 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.057 09:17:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.15 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.15 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:12:01.338 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:01.338 09:17:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67579 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67579 ']' 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67579 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67579 00:12:01.338 killing process with pid 67579 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67579' 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67579 00:12:01.338 09:17:52 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67579 00:12:02.271 09:17:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:02.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.097 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.097 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:03.097 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.097 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.097 00:12:03.097 real 2m29.378s 00:12:03.097 user 1m51.265s 00:12:03.097 sys 0m16.606s 00:12:03.097 ************************************ 00:12:03.097 09:17:54 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.097 09:17:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.097 END TEST sw_hotplug 00:12:03.097 ************************************ 00:12:03.356 09:17:54 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:03.356 09:17:54 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:03.356 09:17:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:03.356 09:17:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.356 09:17:54 -- common/autotest_common.sh@10 -- # set +x 00:12:03.356 ************************************ 00:12:03.356 START TEST nvme_xnvme 00:12:03.356 ************************************ 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:03.356 * Looking for test storage... 
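The hotplug verification that repeats through the sw_hotplug trace above reduces to one small helper plus a comparison. A condensed sketch, assuming rpc_cmd talks to the running target as in autotest_common.sh (the real sw_hotplug.sh feeds jq through process substitution, hence the /dev/fd/63 in the trace):

bdev_bdfs() {    # sw_hotplug.sh@12-13: PCI addresses backing the NVMe bdevs
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))                                 # sw_hotplug.sh@70
[[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]     # sw_hotplug.sh@71

Each hotplug event detaches both controllers, sleeps 12 seconds, and only passes once this check sees both BDFs again.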
00:12:03.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:03.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.356 --rc genhtml_branch_coverage=1 00:12:03.356 --rc genhtml_function_coverage=1 00:12:03.356 --rc genhtml_legend=1 00:12:03.356 --rc geninfo_all_blocks=1 00:12:03.356 --rc geninfo_unexecuted_blocks=1 00:12:03.356 00:12:03.356 ' 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:03.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.356 --rc genhtml_branch_coverage=1 00:12:03.356 --rc genhtml_function_coverage=1 00:12:03.356 --rc genhtml_legend=1 00:12:03.356 --rc geninfo_all_blocks=1 00:12:03.356 --rc geninfo_unexecuted_blocks=1 00:12:03.356 00:12:03.356 ' 00:12:03.356 09:17:54 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:03.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.356 --rc genhtml_branch_coverage=1 00:12:03.356 --rc genhtml_function_coverage=1 00:12:03.356 --rc genhtml_legend=1 00:12:03.356 --rc geninfo_all_blocks=1 00:12:03.356 --rc geninfo_unexecuted_blocks=1 00:12:03.356 00:12:03.356 ' 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:03.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.356 --rc genhtml_branch_coverage=1 00:12:03.356 --rc genhtml_function_coverage=1 00:12:03.356 --rc genhtml_legend=1 00:12:03.356 --rc geninfo_all_blocks=1 00:12:03.356 --rc geninfo_unexecuted_blocks=1 00:12:03.356 00:12:03.356 ' 00:12:03.356 09:17:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.356 09:17:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.356 09:17:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.356 09:17:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.356 09:17:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.356 09:17:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:03.356 09:17:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.356 09:17:54 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.356 09:17:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.356 
************************************ 00:12:03.356 START TEST xnvme_to_malloc_dd_copy 00:12:03.356 ************************************ 00:12:03.356 09:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:03.356 09:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:03.356 09:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:03.356 09:17:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:03.356 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:03.357 09:17:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:03.615 { 00:12:03.615 "subsystems": [ 00:12:03.615 { 00:12:03.615 "subsystem": "bdev", 00:12:03.615 "config": [ 00:12:03.615 { 00:12:03.615 "params": { 00:12:03.615 "block_size": 512, 00:12:03.615 "num_blocks": 2097152, 00:12:03.615 "name": "malloc0" 00:12:03.615 }, 00:12:03.615 "method": "bdev_malloc_create" 00:12:03.615 }, 00:12:03.615 { 00:12:03.615 "params": { 00:12:03.615 "io_mechanism": "libaio", 00:12:03.615 "filename": "/dev/nullb0", 00:12:03.615 "name": "null0" 00:12:03.615 }, 00:12:03.615 "method": "bdev_xnvme_create" 00:12:03.615 }, 
00:12:03.615 { 00:12:03.615 "method": "bdev_wait_for_examine" 00:12:03.615 } 00:12:03.615 ] 00:12:03.615 } 00:12:03.615 ] 00:12:03.615 } 00:12:03.615 [2024-10-08 09:17:55.074653] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:12:03.615 [2024-10-08 09:17:55.074863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68961 ] 00:12:03.615 [2024-10-08 09:17:55.226670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.873 [2024-10-08 09:17:55.411121] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.772  [2024-10-08T09:17:58.388Z] Copying: 232/1024 [MB] (232 MBps) [2024-10-08T09:17:59.795Z] Copying: 477/1024 [MB] (245 MBps) [2024-10-08T09:18:00.361Z] Copying: 779/1024 [MB] (301 MBps) [2024-10-08T09:18:02.263Z] Copying: 1024/1024 [MB] (average 268 MBps) 00:12:10.580 00:12:10.580 09:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:10.580 09:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:10.580 09:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:10.580 09:18:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:10.580 { 00:12:10.580 "subsystems": [ 00:12:10.580 { 00:12:10.580 "subsystem": "bdev", 00:12:10.580 "config": [ 00:12:10.580 { 00:12:10.580 "params": { 00:12:10.580 "block_size": 512, 00:12:10.580 "num_blocks": 2097152, 00:12:10.580 "name": "malloc0" 00:12:10.580 }, 00:12:10.580 "method": "bdev_malloc_create" 00:12:10.580 }, 00:12:10.580 { 00:12:10.580 "params": { 00:12:10.580 "io_mechanism": "libaio", 00:12:10.580 "filename": "/dev/nullb0", 00:12:10.580 "name": "null0" 00:12:10.580 }, 00:12:10.580 "method": "bdev_xnvme_create" 00:12:10.580 }, 00:12:10.580 { 00:12:10.580 "method": "bdev_wait_for_examine" 00:12:10.580 } 00:12:10.580 ] 00:12:10.580 } 00:12:10.580 ] 00:12:10.580 } 00:12:10.580 [2024-10-08 09:18:02.211623] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
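The --json /dev/fd/62 argument in the spdk_dd runs above is process substitution: gen_conf renders the subsystems block shown in the trace and spdk_dd reads it as its config, no temp file involved. A minimal standalone sketch of the first copy, with the JSON taken verbatim from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
)

Reversing --ib and --ob, as xnvme.sh@47 does next, copies the data back from null0 into malloc0.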
00:12:10.581 [2024-10-08 09:18:02.211742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69043 ] 00:12:10.838 [2024-10-08 09:18:02.359320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.096 [2024-10-08 09:18:02.539733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.995  [2024-10-08T09:18:05.619Z] Copying: 232/1024 [MB] (232 MBps) [2024-10-08T09:18:06.563Z] Copying: 465/1024 [MB] (232 MBps) [2024-10-08T09:18:07.498Z] Copying: 733/1024 [MB] (268 MBps) [2024-10-08T09:18:09.405Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:12:17.722 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.982 09:18:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:17.982 { 00:12:17.982 "subsystems": [ 00:12:17.982 { 00:12:17.982 "subsystem": "bdev", 00:12:17.982 "config": [ 00:12:17.982 { 00:12:17.982 "params": { 00:12:17.982 "block_size": 512, 00:12:17.982 "num_blocks": 2097152, 00:12:17.982 "name": "malloc0" 00:12:17.982 }, 00:12:17.982 "method": "bdev_malloc_create" 00:12:17.982 }, 00:12:17.982 { 00:12:17.982 "params": { 00:12:17.982 "io_mechanism": "io_uring", 00:12:17.982 "filename": "/dev/nullb0", 00:12:17.982 "name": "null0" 00:12:17.982 }, 00:12:17.982 "method": "bdev_xnvme_create" 00:12:17.982 }, 00:12:17.982 { 00:12:17.982 "method": "bdev_wait_for_examine" 00:12:17.982 } 00:12:17.982 ] 00:12:17.982 } 00:12:17.982 ] 00:12:17.982 } 00:12:17.982 [2024-10-08 09:18:09.494372] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:12:17.982 [2024-10-08 09:18:09.494499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69131 ] 00:12:17.982 [2024-10-08 09:18:09.642420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.243 [2024-10-08 09:18:09.788910] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.155  [2024-10-08T09:18:12.778Z] Copying: 308/1024 [MB] (308 MBps) [2024-10-08T09:18:13.734Z] Copying: 618/1024 [MB] (309 MBps) [2024-10-08T09:18:14.000Z] Copying: 929/1024 [MB] (311 MBps) [2024-10-08T09:18:15.902Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:24.219 00:12:24.219 09:18:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:24.219 09:18:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:24.219 09:18:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.220 09:18:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.220 { 00:12:24.220 "subsystems": [ 00:12:24.220 { 00:12:24.220 "subsystem": "bdev", 00:12:24.220 "config": [ 00:12:24.220 { 00:12:24.220 "params": { 00:12:24.220 "block_size": 512, 00:12:24.220 "num_blocks": 2097152, 00:12:24.220 "name": "malloc0" 00:12:24.220 }, 00:12:24.220 "method": "bdev_malloc_create" 00:12:24.220 }, 00:12:24.220 { 00:12:24.220 "params": { 00:12:24.220 "io_mechanism": "io_uring", 00:12:24.220 "filename": "/dev/nullb0", 00:12:24.220 "name": "null0" 00:12:24.220 }, 00:12:24.220 "method": "bdev_xnvme_create" 00:12:24.220 }, 00:12:24.220 { 00:12:24.220 "method": "bdev_wait_for_examine" 00:12:24.220 } 00:12:24.220 ] 00:12:24.220 } 00:12:24.220 ] 00:12:24.220 } 00:12:24.480 [2024-10-08 09:18:15.904760] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
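The whole copy pass is then repeated per I/O mechanism; the only thing that changes between the libaio and io_uring runs is one key in the xnvme bdev params. Condensed from the xnvme.sh@20-21 and @38-39 lines in the trace:

declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
for io in libaio io_uring; do                       # xnvme.sh@38
    method_bdev_xnvme_create_0[io_mechanism]=$io    # xnvme.sh@39
    # regenerate the JSON config and rerun both spdk_dd copies here
done

The switch is already visible in the averages: 268 and 258 MBps for the two libaio copy directions versus 310 MBps for the first io_uring pass above.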
00:12:24.480 [2024-10-08 09:18:15.904867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69208 ] 00:12:24.480 [2024-10-08 09:18:16.053574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.741 [2024-10-08 09:18:16.196727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.657  [2024-10-08T09:18:19.281Z] Copying: 316/1024 [MB] (316 MBps) [2024-10-08T09:18:20.225Z] Copying: 633/1024 [MB] (316 MBps) [2024-10-08T09:18:20.225Z] Copying: 950/1024 [MB] (317 MBps) [2024-10-08T09:18:22.775Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:12:31.092 00:12:31.092 09:18:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:31.092 09:18:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:31.092 ************************************ 00:12:31.092 END TEST xnvme_to_malloc_dd_copy 00:12:31.092 ************************************ 00:12:31.092 00:12:31.092 real 0m27.231s 00:12:31.092 user 0m24.069s 00:12:31.092 sys 0m2.623s 00:12:31.092 09:18:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.092 09:18:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 09:18:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.092 09:18:22 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:31.092 09:18:22 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.092 09:18:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 ************************************ 00:12:31.092 START TEST xnvme_bdevperf 00:12:31.092 ************************************ 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:31.092 
09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.092 09:18:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.092 { 00:12:31.092 "subsystems": [ 00:12:31.092 { 00:12:31.092 "subsystem": "bdev", 00:12:31.092 "config": [ 00:12:31.092 { 00:12:31.092 "params": { 00:12:31.092 "io_mechanism": "libaio", 00:12:31.092 "filename": "/dev/nullb0", 00:12:31.092 "name": "null0" 00:12:31.092 }, 00:12:31.092 "method": "bdev_xnvme_create" 00:12:31.092 }, 00:12:31.092 { 00:12:31.092 "method": "bdev_wait_for_examine" 00:12:31.092 } 00:12:31.092 ] 00:12:31.092 } 00:12:31.092 ] 00:12:31.092 } 00:12:31.092 [2024-10-08 09:18:22.369779] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:12:31.092 [2024-10-08 09:18:22.370012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69307 ] 00:12:31.092 [2024-10-08 09:18:22.523261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.092 [2024-10-08 09:18:22.750197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.665 Running I/O for 5 seconds... 00:12:33.605 154304.00 IOPS, 602.75 MiB/s [2024-10-08T09:18:26.233Z] 168320.00 IOPS, 657.50 MiB/s [2024-10-08T09:18:27.177Z] 179754.67 IOPS, 702.17 MiB/s [2024-10-08T09:18:28.121Z] 185472.00 IOPS, 724.50 MiB/s 00:12:36.439 Latency(us) 00:12:36.439 [2024-10-08T09:18:28.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.439 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:36.439 null0 : 5.00 188870.07 737.77 0.00 0.00 336.48 111.85 2054.30 00:12:36.439 [2024-10-08T09:18:28.122Z] =================================================================================================================== 00:12:36.439 [2024-10-08T09:18:28.122Z] Total : 188870.07 737.77 0.00 0.00 336.48 111.85 2054.30 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.008 09:18:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.269 { 00:12:37.269 "subsystems": [ 00:12:37.269 { 00:12:37.269 "subsystem": "bdev", 00:12:37.269 "config": [ 00:12:37.269 { 00:12:37.269 "params": { 00:12:37.269 "io_mechanism": "io_uring", 00:12:37.269 "filename": "/dev/nullb0", 00:12:37.269 "name": "null0" 00:12:37.269 }, 00:12:37.269 "method": "bdev_xnvme_create" 00:12:37.269 }, 00:12:37.269 { 00:12:37.269 "method": 
"bdev_wait_for_examine" 00:12:37.269 } 00:12:37.269 ] 00:12:37.269 } 00:12:37.269 ] 00:12:37.269 } 00:12:37.269 [2024-10-08 09:18:28.743285] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:12:37.269 [2024-10-08 09:18:28.743415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69387 ] 00:12:37.269 [2024-10-08 09:18:28.894293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.530 [2024-10-08 09:18:29.112961] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.793 Running I/O for 5 seconds... 00:12:39.679 229888.00 IOPS, 898.00 MiB/s [2024-10-08T09:18:32.747Z] 230112.00 IOPS, 898.88 MiB/s [2024-10-08T09:18:33.321Z] 230250.67 IOPS, 899.42 MiB/s [2024-10-08T09:18:34.769Z] 230400.00 IOPS, 900.00 MiB/s 00:12:43.086 Latency(us) 00:12:43.086 [2024-10-08T09:18:34.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.086 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.086 null0 : 5.00 230429.54 900.12 0.00 0.00 275.34 151.24 2381.98 00:12:43.086 [2024-10-08T09:18:34.769Z] =================================================================================================================== 00:12:43.086 [2024-10-08T09:18:34.769Z] Total : 230429.54 900.12 0.00 0.00 275.34 151.24 2381.98 00:12:43.343 09:18:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:43.343 09:18:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:43.343 ************************************ 00:12:43.343 END TEST xnvme_bdevperf 00:12:43.343 ************************************ 00:12:43.343 00:12:43.343 real 0m12.695s 00:12:43.343 user 0m10.252s 00:12:43.343 sys 0m2.197s 00:12:43.343 09:18:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.343 09:18:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.602 ************************************ 00:12:43.602 END TEST nvme_xnvme 00:12:43.602 ************************************ 00:12:43.602 00:12:43.602 real 0m40.194s 00:12:43.602 user 0m34.439s 00:12:43.602 sys 0m4.932s 00:12:43.602 09:18:35 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.602 09:18:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.602 09:18:35 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:43.602 09:18:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.602 09:18:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.602 09:18:35 -- common/autotest_common.sh@10 -- # set +x 00:12:43.602 ************************************ 00:12:43.602 START TEST blockdev_xnvme 00:12:43.602 ************************************ 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:43.602 * Looking for test storage... 
00:12:43.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.602 09:18:35 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:43.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.602 --rc genhtml_branch_coverage=1 00:12:43.602 --rc genhtml_function_coverage=1 00:12:43.602 --rc genhtml_legend=1 00:12:43.602 --rc geninfo_all_blocks=1 00:12:43.602 --rc geninfo_unexecuted_blocks=1 00:12:43.602 00:12:43.602 ' 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:43.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.602 --rc genhtml_branch_coverage=1 00:12:43.602 --rc genhtml_function_coverage=1 00:12:43.602 --rc genhtml_legend=1 
00:12:43.602 --rc geninfo_all_blocks=1 00:12:43.602 --rc geninfo_unexecuted_blocks=1 00:12:43.602 00:12:43.602 ' 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:43.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.602 --rc genhtml_branch_coverage=1 00:12:43.602 --rc genhtml_function_coverage=1 00:12:43.602 --rc genhtml_legend=1 00:12:43.602 --rc geninfo_all_blocks=1 00:12:43.602 --rc geninfo_unexecuted_blocks=1 00:12:43.602 00:12:43.602 ' 00:12:43.602 09:18:35 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:43.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.602 --rc genhtml_branch_coverage=1 00:12:43.602 --rc genhtml_function_coverage=1 00:12:43.602 --rc genhtml_legend=1 00:12:43.602 --rc geninfo_all_blocks=1 00:12:43.602 --rc geninfo_unexecuted_blocks=1 00:12:43.602 00:12:43.602 ' 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:43.602 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:43.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69530 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69530 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69530 ']' 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.603 09:18:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.603 09:18:35 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:43.861 [2024-10-08 09:18:35.314773] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
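The setup_xnvme_conf step traced below walks every /dev/nvme*n* node, skips zoned namespaces, and batches one bdev_xnvme_create call per namespace through rpc_cmd. A condensed sketch of blockdev.sh@88-100, assuming zoned_devs has been filled in by get_zoned_devs (empty on this rig):

io_mechanism=io_uring                               # blockdev.sh@88
nvmes=()
declare -A zoned_devs=()
for nvme in /dev/nvme*n*; do                        # blockdev.sh@94
    [[ -b $nvme && -z ${zoned_devs[${nvme##*/}]} ]] || continue
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
done
(( ${#nvmes[@]} > 0 ))                              # blockdev.sh@99
printf '%s\n' "${nvmes[@]}" | rpc_cmd               # blockdev.sh@100

That is why six xNVMe bdevs (nvme0n1 through nvme3n1) come back from bdev_get_bdevs further down.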
00:12:43.861 [2024-10-08 09:18:35.314895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69530 ] 00:12:43.861 [2024-10-08 09:18:35.461231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.120 [2024-10-08 09:18:35.636089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.692 09:18:36 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.692 09:18:36 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:44.692 09:18:36 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:44.692 09:18:36 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:44.692 09:18:36 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:44.692 09:18:36 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:44.692 09:18:36 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:44.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:45.212 Waiting for block devices as requested 00:12:45.212 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.212 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.212 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.469 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.736 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:50.736 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:50.736 09:18:41 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.736 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 09:18:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:50.737 nvme0n1 00:12:50.737 nvme1n1 00:12:50.737 nvme2n1 00:12:50.737 nvme2n2 00:12:50.737 nvme2n3 00:12:50.737 nvme3n1 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.737 
09:18:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:50.737 09:18:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:50.737 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:50.738 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1ee5e974-f45f-41d9-a8f1-bc19ba757c2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1ee5e974-f45f-41d9-a8f1-bc19ba757c2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f093c5ed-a642-4df8-8b97-0ecc4e3ba48e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f093c5ed-a642-4df8-8b97-0ecc4e3ba48e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc2c4758-044b-41ae-a384-2d6b269d4edd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc2c4758-044b-41ae-a384-2d6b269d4edd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5334c162-5f09-4bad-a01c-2fc6a72c9183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5334c162-5f09-4bad-a01c-2fc6a72c9183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5158127c-3586-47b8-a2f3-0e519a4af242"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5158127c-3586-47b8-a2f3-0e519a4af242",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6505aafb-291e-456c-bac2-7c72caef8908"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6505aafb-291e-456c-bac2-7c72caef8908",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:50.738 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:50.738 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:50.738 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:50.738 09:18:42 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69530 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69530 ']' 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69530 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69530 00:12:50.738 killing process with pid 69530 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69530' 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69530 00:12:50.738 09:18:42 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69530 00:12:52.111 09:18:43 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:52.111 09:18:43 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.111 09:18:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:52.111 09:18:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.111 09:18:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.111 ************************************ 00:12:52.111 START TEST bdev_hello_world 00:12:52.111 ************************************ 00:12:52.111 09:18:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:52.111 [2024-10-08 09:18:43.512041] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:12:52.111 [2024-10-08 09:18:43.512158] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69888 ] 00:12:52.111 [2024-10-08 09:18:43.661155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.369 [2024-10-08 09:18:43.801739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.627 [2024-10-08 09:18:44.083080] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:52.627 [2024-10-08 09:18:44.083120] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:52.627 [2024-10-08 09:18:44.083132] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:52.627 [2024-10-08 09:18:44.084599] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:52.627 [2024-10-08 09:18:44.084895] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:52.627 [2024-10-08 09:18:44.084913] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:52.627 [2024-10-08 09:18:44.085200] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
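
The notices above are the complete hello_bdev round trip: open nvme0n1 from the JSON config, write a buffer, and read it back. A minimal sketch of reproducing that step by hand, assuming the same repo layout and the bdev.json generated earlier in this job:

    # Sketch: re-run the bdev_hello_world step manually (paths as used in this run).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b nvme0n1    # first unclaimed xNVMe bdev from the dump above
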
00:12:52.627 00:12:52.627 [2024-10-08 09:18:44.085216] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.239 00:12:53.239 real 0m1.265s 00:12:53.239 user 0m0.984s 00:12:53.239 sys 0m0.170s 00:12:53.239 09:18:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.239 ************************************ 00:12:53.239 END TEST bdev_hello_world 00:12:53.239 ************************************ 00:12:53.239 09:18:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 09:18:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.239 09:18:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.239 09:18:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.239 09:18:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 ************************************ 00:12:53.239 START TEST bdev_bounds 00:12:53.239 ************************************ 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69919 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.239 Process bdevio pid: 69919 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69919' 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69919 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69919 ']' 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.239 09:18:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 [2024-10-08 09:18:44.842872] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
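
waitforlisten, traced just above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly launched bdevio answers on its RPC socket. A condensed sketch of that wait loop, assuming rpc_get_methods is an adequate liveness probe (the real helper in autotest_common.sh is more thorough and also checks the pid):

    # Sketch: poll the UNIX-domain RPC socket until the app is serving requests.
    rpc_addr=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods >/dev/null 2>&1 && break   # socket is up and answering
        sleep 0.1   # assumed retry interval
    done
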
00:12:53.239 [2024-10-08 09:18:44.842987] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69919 ] 00:12:53.497 [2024-10-08 09:18:44.993358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.497 [2024-10-08 09:18:45.175774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.497 [2024-10-08 09:18:45.176034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.497 [2024-10-08 09:18:45.176054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.061 09:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.061 09:18:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:54.061 09:18:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.318 I/O targets: 00:12:54.318 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:54.318 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:54.318 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.318 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.318 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:54.318 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:54.318 00:12:54.318 00:12:54.318 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.318 http://cunit.sourceforge.net/ 00:12:54.318 00:12:54.318 00:12:54.318 Suite: bdevio tests on: nvme3n1 00:12:54.318 Test: blockdev write read block ...passed 00:12:54.318 Test: blockdev write zeroes read block ...passed 00:12:54.318 Test: blockdev write zeroes read no split ...passed 00:12:54.318 Test: blockdev write zeroes read split ...passed 00:12:54.318 Test: blockdev write zeroes read split partial ...passed 00:12:54.318 Test: blockdev reset ...passed 00:12:54.318 Test: blockdev write read 8 blocks ...passed 00:12:54.318 Test: blockdev write read size > 128k ...passed 00:12:54.318 Test: blockdev write read invalid size ...passed 00:12:54.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.318 Test: blockdev write read max offset ...passed 00:12:54.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.318 Test: blockdev writev readv 8 blocks ...passed 00:12:54.318 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.318 Test: blockdev writev readv block ...passed 00:12:54.318 Test: blockdev writev readv size > 128k ...passed 00:12:54.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.318 Test: blockdev comparev and writev ...passed 00:12:54.318 Test: blockdev nvme passthru rw ...passed 00:12:54.318 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.318 Test: blockdev nvme admin passthru ...passed 00:12:54.318 Test: blockdev copy ...passed 00:12:54.318 Suite: bdevio tests on: nvme2n3 00:12:54.318 Test: blockdev write read block ...passed 00:12:54.318 Test: blockdev write zeroes read block ...passed 00:12:54.318 Test: blockdev write zeroes read no split ...passed 00:12:54.318 Test: blockdev write zeroes read split ...passed 00:12:54.318 Test: blockdev write zeroes read split partial ...passed 00:12:54.318 Test: blockdev reset ...passed 
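
The sizes in the I/O targets list above follow from each bdev's geometry in the JSON dumped earlier: num_blocks times block_size, e.g. 1310720 x 4096 B = 5120 MiB, with nvme1n1's 1548666 blocks rounding up to 6050 MiB. A quick sketch of deriving the same lines, assuming bdevs.json holds the array returned by rpc.py bdev_get_bdevs:

    # Sketch: recompute the I/O target sizes from the bdev metadata.
    jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes (\(.num_blocks * .block_size / 1048576) MiB)"' bdevs.json
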
00:12:54.318 Test: blockdev write read 8 blocks ...passed 00:12:54.318 Test: blockdev write read size > 128k ...passed 00:12:54.318 Test: blockdev write read invalid size ...passed 00:12:54.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.318 Test: blockdev write read max offset ...passed 00:12:54.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.318 Test: blockdev writev readv 8 blocks ...passed 00:12:54.318 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.318 Test: blockdev writev readv block ...passed 00:12:54.318 Test: blockdev writev readv size > 128k ...passed 00:12:54.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.318 Test: blockdev comparev and writev ...passed 00:12:54.318 Test: blockdev nvme passthru rw ...passed 00:12:54.318 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.318 Test: blockdev nvme admin passthru ...passed 00:12:54.318 Test: blockdev copy ...passed 00:12:54.318 Suite: bdevio tests on: nvme2n2 00:12:54.318 Test: blockdev write read block ...passed 00:12:54.318 Test: blockdev write zeroes read block ...passed 00:12:54.318 Test: blockdev write zeroes read no split ...passed 00:12:54.318 Test: blockdev write zeroes read split ...passed 00:12:54.318 Test: blockdev write zeroes read split partial ...passed 00:12:54.318 Test: blockdev reset ...passed 00:12:54.318 Test: blockdev write read 8 blocks ...passed 00:12:54.318 Test: blockdev write read size > 128k ...passed 00:12:54.318 Test: blockdev write read invalid size ...passed 00:12:54.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.318 Test: blockdev write read max offset ...passed 00:12:54.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.318 Test: blockdev writev readv 8 blocks ...passed 00:12:54.318 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.319 Test: blockdev writev readv block ...passed 00:12:54.319 Test: blockdev writev readv size > 128k ...passed 00:12:54.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.319 Test: blockdev comparev and writev ...passed 00:12:54.319 Test: blockdev nvme passthru rw ...passed 00:12:54.319 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.319 Test: blockdev nvme admin passthru ...passed 00:12:54.319 Test: blockdev copy ...passed 00:12:54.319 Suite: bdevio tests on: nvme2n1 00:12:54.319 Test: blockdev write read block ...passed 00:12:54.319 Test: blockdev write zeroes read block ...passed 00:12:54.319 Test: blockdev write zeroes read no split ...passed 00:12:54.319 Test: blockdev write zeroes read split ...passed 00:12:54.319 Test: blockdev write zeroes read split partial ...passed 00:12:54.319 Test: blockdev reset ...passed 00:12:54.319 Test: blockdev write read 8 blocks ...passed 00:12:54.319 Test: blockdev write read size > 128k ...passed 00:12:54.319 Test: blockdev write read invalid size ...passed 00:12:54.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.319 Test: blockdev write read max offset ...passed 00:12:54.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.319 Test: blockdev writev readv 8 blocks 
...passed 00:12:54.319 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.576 Test: blockdev writev readv block ...passed 00:12:54.576 Test: blockdev writev readv size > 128k ...passed 00:12:54.576 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.576 Test: blockdev comparev and writev ...passed 00:12:54.576 Test: blockdev nvme passthru rw ...passed 00:12:54.576 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.576 Test: blockdev nvme admin passthru ...passed 00:12:54.576 Test: blockdev copy ...passed 00:12:54.576 Suite: bdevio tests on: nvme1n1 00:12:54.576 Test: blockdev write read block ...passed 00:12:54.576 Test: blockdev write zeroes read block ...passed 00:12:54.576 Test: blockdev write zeroes read no split ...passed 00:12:54.576 Test: blockdev write zeroes read split ...passed 00:12:54.576 Test: blockdev write zeroes read split partial ...passed 00:12:54.576 Test: blockdev reset ...passed 00:12:54.576 Test: blockdev write read 8 blocks ...passed 00:12:54.576 Test: blockdev write read size > 128k ...passed 00:12:54.576 Test: blockdev write read invalid size ...passed 00:12:54.576 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.576 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.576 Test: blockdev write read max offset ...passed 00:12:54.576 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.576 Test: blockdev writev readv 8 blocks ...passed 00:12:54.576 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.576 Test: blockdev writev readv block ...passed 00:12:54.576 Test: blockdev writev readv size > 128k ...passed 00:12:54.576 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.576 Test: blockdev comparev and writev ...passed 00:12:54.577 Test: blockdev nvme passthru rw ...passed 00:12:54.577 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.577 Test: blockdev nvme admin passthru ...passed 00:12:54.577 Test: blockdev copy ...passed 00:12:54.577 Suite: bdevio tests on: nvme0n1 00:12:54.577 Test: blockdev write read block ...passed 00:12:54.577 Test: blockdev write zeroes read block ...passed 00:12:54.577 Test: blockdev write zeroes read no split ...passed 00:12:54.577 Test: blockdev write zeroes read split ...passed 00:12:54.577 Test: blockdev write zeroes read split partial ...passed 00:12:54.577 Test: blockdev reset ...passed 00:12:54.577 Test: blockdev write read 8 blocks ...passed 00:12:54.577 Test: blockdev write read size > 128k ...passed 00:12:54.577 Test: blockdev write read invalid size ...passed 00:12:54.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.577 Test: blockdev write read max offset ...passed 00:12:54.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.577 Test: blockdev writev readv 8 blocks ...passed 00:12:54.577 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.577 Test: blockdev writev readv block ...passed 00:12:54.577 Test: blockdev writev readv size > 128k ...passed 00:12:54.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.577 Test: blockdev comparev and writev ...passed 00:12:54.577 Test: blockdev nvme passthru rw ...passed 00:12:54.577 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.577 Test: blockdev nvme admin passthru ...passed 00:12:54.577 Test: blockdev copy ...passed 
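
Each of the six suites above runs the same 23-test battery, which is where the totals in the CUnit summary below come from:

    # Sketch: cross-check the run summary that follows.
    suites=6             # one suite per bdev in the I/O targets list
    tests_per_suite=23   # the 'blockdev ...' tests repeated in every suite
    echo $(( suites * tests_per_suite ))   # 138, matching 'tests 138 138 138 0 0'
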
00:12:54.577 
00:12:54.577 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:54.577               suites      6      6    n/a      0        0
00:12:54.577                tests    138    138    138      0        0
00:12:54.577              asserts    780    780    780      0      n/a
00:12:54.577 
00:12:54.577 Elapsed time = 1.007 seconds
00:12:54.577 0
00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69919 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69919 ']' 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69919 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69919 00:12:54.577 killing process with pid 69919 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69919' 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69919 00:12:54.577 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69919 00:12:55.511 09:18:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:55.511 00:12:55.511 real 0m2.107s 00:12:55.511 user 0m4.998s 00:12:55.511 sys 0m0.290s 00:12:55.511 09:18:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.511 ************************************ 00:12:55.511 END TEST bdev_bounds 00:12:55.511 ************************************ 00:12:55.511 09:18:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.511 09:18:46 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:55.511 09:18:46 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.511 09:18:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.511 ************************************ 00:12:55.511 START TEST bdev_nbd 00:12:55.511 ************************************ 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
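
nbd_function_test, entered above, exports every bdev as a kernel block device through SPDK's NBD server and later tears the mappings down. A minimal sketch of that round trip using the same RPCs traced below, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock:

    # Sketch: the bdev <-> /dev/nbd* round trip performed by bdev_nbd.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdevs[@]}"; do
        $RPC nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"   # export bdev to the kernel
    done
    $RPC nbd_get_disks                                     # JSON list of active mappings
    for nbd in "${nbds[@]}"; do
        $RPC nbd_stop_disk "$nbd"                          # detach the device again
    done
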
00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69981 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69981 /var/tmp/spdk-nbd.sock 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69981 ']' 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.511 09:18:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:55.511 [2024-10-08 09:18:47.019666] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
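
After each nbd_start_disk, the harness vets the new device with the waitfornbd helper traced below: poll /proc/partitions until the kernel registers the device, then prove it serves I/O with a single 4 KiB direct read. A condensed sketch of that pattern (retry count from the trace; the sleep interval is assumed, and the real helper also stats the copied file):

    # Sketch of the waitfornbd readiness check used for every /dev/nbd*.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
            sleep 0.1   # assumed interval between retries
        done
        # One direct read through the device, as in the dd lines below.
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    }
    waitfornbd nbd0
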
00:12:55.511 [2024-10-08 09:18:47.019781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.511 [2024-10-08 09:18:47.166945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.769 [2024-10-08 09:18:47.309628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.335 09:18:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.593 
1+0 records in 00:12:56.593 1+0 records out 00:12:56.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642546 s, 6.4 MB/s 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.593 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.850 1+0 records in 00:12:56.850 1+0 records out 00:12:56.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530186 s, 7.7 MB/s 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:56.850 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:57.107 09:18:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.107 1+0 records in 00:12:57.107 1+0 records out 00:12:57.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523933 s, 7.8 MB/s 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:57.107 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.108 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.108 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.365 1+0 records in 00:12:57.365 1+0 records out 00:12:57.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504304 s, 8.1 MB/s 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.365 09:18:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.365 1+0 records in 00:12:57.365 1+0 records out 00:12:57.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448176 s, 9.1 MB/s 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.365 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:57.623 09:18:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.623 1+0 records in 00:12:57.623 1+0 records out 00:12:57.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003587 s, 11.4 MB/s 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:57.623 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd0", 00:12:57.880 "bdev_name": "nvme0n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd1", 00:12:57.880 "bdev_name": "nvme1n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd2", 00:12:57.880 "bdev_name": "nvme2n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd3", 00:12:57.880 "bdev_name": "nvme2n2" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd4", 00:12:57.880 "bdev_name": "nvme2n3" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd5", 00:12:57.880 "bdev_name": "nvme3n1" 00:12:57.880 } 00:12:57.880 ]' 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd0", 00:12:57.880 "bdev_name": "nvme0n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd1", 00:12:57.880 "bdev_name": "nvme1n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd2", 00:12:57.880 "bdev_name": "nvme2n1" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd3", 00:12:57.880 "bdev_name": "nvme2n2" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd4", 00:12:57.880 "bdev_name": "nvme2n3" 00:12:57.880 }, 00:12:57.880 { 00:12:57.880 "nbd_device": "/dev/nbd5", 00:12:57.880 "bdev_name": "nvme3n1" 00:12:57.880 } 00:12:57.880 ]' 00:12:57.880 09:18:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:57.880 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.881 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.139 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.400 09:18:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.661 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.923 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.184 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.443 09:18:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:59.701 /dev/nbd0 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.701 1+0 records in 00:12:59.701 1+0 records out 00:12:59.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316172 s, 13.0 MB/s 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.701 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:59.701 /dev/nbd1 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.959 1+0 records in 00:12:59.959 1+0 records out 00:12:59.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416958 s, 9.8 MB/s 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.959 09:18:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:59.959 /dev/nbd10 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.959 1+0 records in 00:12:59.959 1+0 records out 00:12:59.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296714 s, 13.8 MB/s 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.959 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:00.228 /dev/nbd11 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:00.228 09:18:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.228 1+0 records in 00:13:00.228 1+0 records out 00:13:00.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401221 s, 10.2 MB/s 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.228 09:18:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:00.503 /dev/nbd12 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.503 1+0 records in 00:13:00.503 1+0 records out 00:13:00.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474696 s, 8.6 MB/s 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.503 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:00.760 /dev/nbd13 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.760 1+0 records in 00:13:00.760 1+0 records out 00:13:00.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439384 s, 9.3 MB/s 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:00.760 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:00.761 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd0", 00:13:01.018 "bdev_name": "nvme0n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd1", 00:13:01.018 "bdev_name": "nvme1n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd10", 00:13:01.018 "bdev_name": "nvme2n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd11", 00:13:01.018 "bdev_name": "nvme2n2" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd12", 00:13:01.018 "bdev_name": "nvme2n3" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd13", 00:13:01.018 "bdev_name": "nvme3n1" 00:13:01.018 } 00:13:01.018 ]' 00:13:01.018 09:18:52 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd0", 00:13:01.018 "bdev_name": "nvme0n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd1", 00:13:01.018 "bdev_name": "nvme1n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd10", 00:13:01.018 "bdev_name": "nvme2n1" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd11", 00:13:01.018 "bdev_name": "nvme2n2" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd12", 00:13:01.018 "bdev_name": "nvme2n3" 00:13:01.018 }, 00:13:01.018 { 00:13:01.018 "nbd_device": "/dev/nbd13", 00:13:01.018 "bdev_name": "nvme3n1" 00:13:01.018 } 00:13:01.018 ]' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:01.018 /dev/nbd1 00:13:01.018 /dev/nbd10 00:13:01.018 /dev/nbd11 00:13:01.018 /dev/nbd12 00:13:01.018 /dev/nbd13' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:01.018 /dev/nbd1 00:13:01.018 /dev/nbd10 00:13:01.018 /dev/nbd11 00:13:01.018 /dev/nbd12 00:13:01.018 /dev/nbd13' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:01.018 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:01.018 256+0 records in 00:13:01.018 256+0 records out 00:13:01.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00925973 s, 113 MB/s 00:13:01.019 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.019 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:01.019 256+0 records in 00:13:01.019 256+0 records out 00:13:01.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0688406 s, 15.2 MB/s 00:13:01.019 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.019 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:01.276 256+0 records in 00:13:01.276 256+0 records out 00:13:01.276 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0702748 s, 14.9 MB/s 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:01.276 256+0 records in 00:13:01.276 256+0 records out 00:13:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0599115 s, 17.5 MB/s 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:01.276 256+0 records in 00:13:01.276 256+0 records out 00:13:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.066764 s, 15.7 MB/s 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:01.276 256+0 records in 00:13:01.276 256+0 records out 00:13:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0589483 s, 17.8 MB/s 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:01.276 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:01.534 256+0 records in 00:13:01.534 256+0 records out 00:13:01.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0637431 s, 16.5 MB/s 00:13:01.534 09:18:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.534 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.793 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.052 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.310 09:18:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.568 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.827 09:18:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.827 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:03.085 malloc_lvol_verify 00:13:03.085 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:03.343 0750c32f-d12b-481e-9634-24a574140a11 00:13:03.343 09:18:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:03.601 1c5c56bf-745d-4f3d-bf67-a70b9b85ed06 00:13:03.601 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:03.859 /dev/nbd0 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:03.859 mke2fs 1.47.0 (5-Feb-2023) 00:13:03.859 Discarding device blocks: 0/4096 done 00:13:03.859 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:03.859 00:13:03.859 Allocating group tables: 0/1 done 00:13:03.859 Writing inode tables: 0/1 done 00:13:03.859 Creating journal (1024 blocks): done 00:13:03.859 Writing superblocks and filesystem accounting information: 0/1 done 00:13:03.859 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.859 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69981 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69981 ']' 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69981 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69981 00:13:04.116 killing process with pid 69981 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69981' 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69981 00:13:04.116 09:18:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69981 00:13:05.051 ************************************ 00:13:05.051 END TEST bdev_nbd 00:13:05.051 ************************************ 00:13:05.051 09:18:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:05.051 00:13:05.051 real 0m9.486s 00:13:05.051 user 0m13.472s 00:13:05.051 sys 0m3.083s 00:13:05.052 09:18:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.052 
09:18:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:05.052 09:18:56 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:05.052 09:18:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:05.052 09:18:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:05.052 09:18:56 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:05.052 09:18:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:05.052 09:18:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.052 09:18:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.052 ************************************ 00:13:05.052 START TEST bdev_fio 00:13:05.052 ************************************ 00:13:05.052 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:05.052 ************************************ 00:13:05.052 START TEST bdev_fio_rw_verify 00:13:05.052 ************************************ 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:05.052 09:18:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:05.310 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:05.310 fio-3.35 00:13:05.310 Starting 6 threads 00:13:17.529 00:13:17.529 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70372: Tue Oct 8 09:19:07 2024 00:13:17.529 read: IOPS=32.3k, BW=126MiB/s (132MB/s)(1262MiB/10002msec) 00:13:17.529 slat (usec): min=2, max=3199, avg= 4.94, stdev=10.55 00:13:17.529 clat (usec): min=76, max=6924, avg=573.00, 
stdev=535.39 00:13:17.529 lat (usec): min=80, max=6928, avg=577.94, stdev=535.94 00:13:17.529 clat percentiles (usec): 00:13:17.529 | 50.000th=[ 412], 99.000th=[ 2802], 99.900th=[ 4293], 99.990th=[ 5800], 00:13:17.529 | 99.999th=[ 6718] 00:13:17.529 write: IOPS=32.6k, BW=127MiB/s (134MB/s)(1275MiB/10002msec); 0 zone resets 00:13:17.529 slat (usec): min=3, max=4163, avg=24.04, stdev=78.03 00:13:17.529 clat (usec): min=59, max=6903, avg=680.88, stdev=616.44 00:13:17.529 lat (usec): min=90, max=6930, avg=704.92, stdev=628.92 00:13:17.529 clat percentiles (usec): 00:13:17.529 | 50.000th=[ 486], 99.000th=[ 3261], 99.900th=[ 4686], 99.990th=[ 5997], 00:13:17.529 | 99.999th=[ 6849] 00:13:17.529 bw ( KiB/s): min=54005, max=201376, per=100.00%, avg=134585.68, stdev=8114.86, samples=114 00:13:17.529 iops : min=13500, max=50344, avg=33645.58, stdev=2028.77, samples=114 00:13:17.529 lat (usec) : 100=0.08%, 250=14.88%, 500=42.80%, 750=21.70%, 1000=6.67% 00:13:17.529 lat (msec) : 2=9.67%, 4=3.99%, 10=0.23% 00:13:17.529 cpu : usr=49.60%, sys=30.05%, ctx=7819, majf=0, minf=26957 00:13:17.529 IO depths : 1=12.1%, 2=24.6%, 4=50.3%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.529 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.529 issued rwts: total=323149,326336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.529 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:17.529 00:13:17.529 Run status group 0 (all jobs): 00:13:17.529 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1262MiB (1324MB), run=10002-10002msec 00:13:17.529 WRITE: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=1275MiB (1337MB), run=10002-10002msec 00:13:17.529 ----------------------------------------------------- 00:13:17.529 Suppressions used: 00:13:17.529 count bytes template 00:13:17.529 6 48 /usr/src/fio/parse.c 00:13:17.529 2951 283296 /usr/src/fio/iolog.c 00:13:17.529 1 8 libtcmalloc_minimal.so 00:13:17.529 1 904 libcrypto.so 00:13:17.529 ----------------------------------------------------- 00:13:17.529 00:13:17.529 00:13:17.529 real 0m11.824s 00:13:17.529 user 0m31.189s 00:13:17.529 sys 0m18.299s 00:13:17.529 09:19:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.529 09:19:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:17.529 ************************************ 00:13:17.529 END TEST bdev_fio_rw_verify 00:13:17.529 ************************************ 00:13:17.529 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1ee5e974-f45f-41d9-a8f1-bc19ba757c2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1ee5e974-f45f-41d9-a8f1-bc19ba757c2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f093c5ed-a642-4df8-8b97-0ecc4e3ba48e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f093c5ed-a642-4df8-8b97-0ecc4e3ba48e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc2c4758-044b-41ae-a384-2d6b269d4edd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc2c4758-044b-41ae-a384-2d6b269d4edd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "5334c162-5f09-4bad-a01c-2fc6a72c9183"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5334c162-5f09-4bad-a01c-2fc6a72c9183",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5158127c-3586-47b8-a2f3-0e519a4af242"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5158127c-3586-47b8-a2f3-0e519a4af242",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6505aafb-291e-456c-bac2-7c72caef8908"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6505aafb-291e-456c-bac2-7c72caef8908",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:17.530 /home/vagrant/spdk_repo/spdk 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
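The trim-workload config generated just above ends up with no jobs because blockdev.sh@354 keeps only bdevs whose supported_io_types report unmap=true, and every xNVMe bdev in the JSON dump advertises "unmap": false. A minimal sketch of that filter, assuming the rpc.py path used throughout this log and the standard bdev_get_bdevs RPC (the exact pipeline inside blockdev.sh may differ):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # List only bdevs that can service unmap/trim; with the six xNVMe
    # bdevs above this prints nothing, so no trim jobs reach bdev.fio.
    trim_bdevs=$("$rpc_py" bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name')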
00:13:17.530 00:13:17.530 real 0m11.985s 00:13:17.530 user 0m31.263s 00:13:17.530 sys 0m18.371s 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.530 ************************************ 00:13:17.530 END TEST bdev_fio 00:13:17.530 ************************************ 00:13:17.530 09:19:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:17.530 09:19:08 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:17.530 09:19:08 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:17.530 09:19:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:17.530 09:19:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.530 09:19:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.530 ************************************ 00:13:17.530 START TEST bdev_verify 00:13:17.530 ************************************ 00:13:17.530 09:19:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:17.530 [2024-10-08 09:19:08.609164] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:17.530 [2024-10-08 09:19:08.609282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70549 ] 00:13:17.530 [2024-10-08 09:19:08.758435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.530 [2024-10-08 09:19:08.953068] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.530 [2024-10-08 09:19:08.953173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.792 Running I/O for 5 seconds... 
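The verify run starting here uses the bdevperf invocation traced in the run_test line above. An annotated restatement of those flags (the values are copied verbatim from the log; the per-flag comments are added here as a hedged reading of bdevperf's options):

    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
        --json "$SPDK/test/bdev/bdev.json"  # bdev config for the six xNVMe bdevs
        -q 128                              # queue depth per job
        -o 4096                             # I/O size in bytes (4 KiB)
        -w verify                           # write, read back, and compare payloads
        -t 5                                # run time in seconds
        -C                                  # let every reactor core drive I/O to each bdev
        -m 0x3                              # core mask: reactors on cores 0 and 1
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"

The -C / -m 0x3 pairing is why the result table that follows lists two jobs per bdev, one on Core Mask 0x1 and one on Core Mask 0x2.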
00:13:20.122 22688.00 IOPS, 88.62 MiB/s [2024-10-08T09:19:12.747Z] 23264.00 IOPS, 90.88 MiB/s [2024-10-08T09:19:13.690Z] 23104.00 IOPS, 90.25 MiB/s [2024-10-08T09:19:14.634Z] 23384.00 IOPS, 91.34 MiB/s [2024-10-08T09:19:14.634Z] 23264.00 IOPS, 90.88 MiB/s 00:13:22.951 Latency(us) 00:13:22.951 [2024-10-08T09:19:14.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.951 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0x0 length 0xa0000 00:13:22.951 nvme0n1 : 5.06 1847.41 7.22 0.00 0.00 69146.38 10889.06 76626.71 00:13:22.951 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0xa0000 length 0xa0000 00:13:22.951 nvme0n1 : 5.09 1812.05 7.08 0.00 0.00 70506.94 6452.78 74610.22 00:13:22.951 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0x0 length 0xbd0bd 00:13:22.951 nvme1n1 : 5.06 2331.56 9.11 0.00 0.00 54683.23 8217.21 57671.68 00:13:22.951 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:22.951 nvme1n1 : 5.08 2274.49 8.88 0.00 0.00 56006.53 6200.71 67754.14 00:13:22.951 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0x0 length 0x80000 00:13:22.951 nvme2n1 : 5.08 1865.45 7.29 0.00 0.00 68279.36 10737.82 67350.84 00:13:22.951 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0x80000 length 0x80000 00:13:22.951 nvme2n1 : 5.08 1838.60 7.18 0.00 0.00 69159.93 11393.18 71383.83 00:13:22.951 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.951 Verification LBA range: start 0x0 length 0x80000 00:13:22.951 nvme2n2 : 5.07 1843.95 7.20 0.00 0.00 68756.72 14619.57 67350.84 00:13:22.951 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.952 Verification LBA range: start 0x80000 length 0x80000 00:13:22.952 nvme2n2 : 5.09 1810.90 7.07 0.00 0.00 70088.75 10183.29 76626.71 00:13:22.952 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.952 Verification LBA range: start 0x0 length 0x80000 00:13:22.952 nvme2n3 : 5.08 1864.71 7.28 0.00 0.00 67842.54 10384.94 74206.92 00:13:22.952 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.952 Verification LBA range: start 0x80000 length 0x80000 00:13:22.952 nvme2n3 : 5.07 1816.07 7.09 0.00 0.00 69723.29 7864.32 76223.41 00:13:22.952 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:22.952 Verification LBA range: start 0x0 length 0x20000 00:13:22.952 nvme3n1 : 5.09 1862.09 7.27 0.00 0.00 67824.70 4032.98 66140.95 00:13:22.952 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:22.952 Verification LBA range: start 0x20000 length 0x20000 00:13:22.952 nvme3n1 : 5.08 1812.72 7.08 0.00 0.00 69726.26 5444.53 83079.48 00:13:22.952 [2024-10-08T09:19:14.635Z] =================================================================================================================== 00:13:22.952 [2024-10-08T09:19:14.635Z] Total : 22979.99 89.77 0.00 0.00 66341.22 4032.98 83079.48 00:13:23.894 00:13:23.894 real 0m6.873s 00:13:23.894 user 0m11.188s 00:13:23.894 sys 0m1.278s 00:13:23.894 09:19:15 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.894 09:19:15 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:23.894 ************************************ 00:13:23.894 END TEST bdev_verify 00:13:23.894 ************************************ 00:13:23.894 09:19:15 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:23.894 09:19:15 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:23.894 09:19:15 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.894 09:19:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.894 ************************************ 00:13:23.894 START TEST bdev_verify_big_io 00:13:23.894 ************************************ 00:13:23.894 09:19:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:23.895 [2024-10-08 09:19:15.567572] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:23.895 [2024-10-08 09:19:15.567712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70642 ] 00:13:24.155 [2024-10-08 09:19:15.721040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.417 [2024-10-08 09:19:15.961113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.417 [2024-10-08 09:19:15.961215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.989 Running I/O for 5 seconds... 
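The big-I/O pass repeats the verify workload with -o 65536, so each transfer is 64 KiB and the MiB/s column should equal IOPS divided by 16. A quick arithmetic check against the table that follows, using the nvme3n1 / Core Mask 0x2 row below:

    # 141.90 IOPS x 64 KiB per I/O ~= 8.87 MiB/s, matching the reported column.
    awk 'BEGIN { printf "%.2f MiB/s\n", 141.90 * 65536 / (1024 * 1024) }'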
00:13:30.838 912.00 IOPS, 57.00 MiB/s [2024-10-08T09:19:22.781Z] 2600.00 IOPS, 162.50 MiB/s [2024-10-08T09:19:22.781Z] 3013.33 IOPS, 188.33 MiB/s 00:13:31.098 Latency(us) 00:13:31.098 [2024-10-08T09:19:22.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.098 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0xa000 00:13:31.098 nvme0n1 : 5.88 108.83 6.80 0.00 0.00 1137033.85 179064.52 1884210.41 00:13:31.098 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0xa000 length 0xa000 00:13:31.098 nvme0n1 : 6.06 84.47 5.28 0.00 0.00 1405465.21 198422.84 1335724.50 00:13:31.098 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0xbd0b 00:13:31.098 nvme1n1 : 5.79 176.89 11.06 0.00 0.00 680508.85 80256.39 890483.00 00:13:31.098 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:31.098 nvme1n1 : 6.08 94.81 5.93 0.00 0.00 1251536.65 67754.14 2348810.24 00:13:31.098 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0x8000 00:13:31.098 nvme2n1 : 5.88 152.27 9.52 0.00 0.00 754643.44 145994.04 871124.68 00:13:31.098 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x8000 length 0x8000 00:13:31.098 nvme2n1 : 6.07 116.06 7.25 0.00 0.00 995402.47 172611.74 1071160.71 00:13:31.098 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0x8000 00:13:31.098 nvme2n2 : 5.89 127.64 7.98 0.00 0.00 874073.26 98808.12 2271376.94 00:13:31.098 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x8000 length 0x8000 00:13:31.098 nvme2n2 : 6.08 136.88 8.56 0.00 0.00 822050.72 75013.51 1445421.69 00:13:31.098 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0x8000 00:13:31.098 nvme2n3 : 6.09 126.19 7.89 0.00 0.00 861145.67 45371.08 1677721.60 00:13:31.098 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x8000 length 0x8000 00:13:31.098 nvme2n3 : 6.08 173.65 10.85 0.00 0.00 623528.64 68560.74 838860.80 00:13:31.098 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x0 length 0x2000 00:13:31.098 nvme3n1 : 6.09 110.26 6.89 0.00 0.00 954046.32 4537.11 2826315.62 00:13:31.098 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:31.098 Verification LBA range: start 0x2000 length 0x2000 00:13:31.098 nvme3n1 : 6.09 141.90 8.87 0.00 0.00 739109.37 13611.32 1606741.07 00:13:31.098 [2024-10-08T09:19:22.781Z] =================================================================================================================== 00:13:31.098 [2024-10-08T09:19:22.781Z] Total : 1549.86 96.87 0.00 0.00 880240.42 4537.11 2826315.62 00:13:32.040 00:13:32.040 real 0m8.220s 00:13:32.040 user 0m14.795s 00:13:32.040 sys 0m0.513s 00:13:32.040 09:19:23 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.040 09:19:23 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.040 ************************************ 00:13:32.040 END TEST bdev_verify_big_io 00:13:32.040 ************************************ 00:13:32.300 09:19:23 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.300 09:19:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:32.300 09:19:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.300 09:19:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.300 ************************************ 00:13:32.300 START TEST bdev_write_zeroes 00:13:32.300 ************************************ 00:13:32.300 09:19:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.300 [2024-10-08 09:19:23.853023] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:32.300 [2024-10-08 09:19:23.853169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:13:32.561 [2024-10-08 09:19:24.008190] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.821 [2024-10-08 09:19:24.254857] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.085 Running I/O for 1 seconds... 
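
The write_zeroes stage differs from the verify passes only in workload and scale: it runs on a single reactor (the EAL line above shows -c 0x1, so one job per device) and issues write-zeroes commands rather than pattern writes, which helps explain the much higher per-device throughput in the table that follows. A minimal standalone sketch under the same repo-layout assumption as above:

    # zero-fill pass: 4 KiB write_zeroes ops, queue depth 128, 1 second
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
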
00:13:34.059 82016.00 IOPS, 320.38 MiB/s 00:13:34.059 Latency(us) 00:13:34.059 [2024-10-08T09:19:25.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.059 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme0n1 : 1.02 13172.83 51.46 0.00 0.00 9705.79 6024.27 28230.89 00:13:34.059 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme1n1 : 1.03 15213.30 59.43 0.00 0.00 8366.26 5192.47 23088.84 00:13:34.059 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme2n1 : 1.02 13152.46 51.38 0.00 0.00 9639.82 4587.52 21475.64 00:13:34.059 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme2n2 : 1.02 13136.35 51.31 0.00 0.00 9643.58 4587.52 20769.87 00:13:34.059 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme2n3 : 1.02 13121.51 51.26 0.00 0.00 9647.55 4587.52 20467.40 00:13:34.059 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:34.059 nvme3n1 : 1.03 13106.21 51.20 0.00 0.00 9649.38 4385.87 20568.22 00:13:34.059 [2024-10-08T09:19:25.742Z] =================================================================================================================== 00:13:34.059 [2024-10-08T09:19:25.742Z] Total : 80902.67 316.03 0.00 0.00 9413.80 4385.87 28230.89 00:13:35.002 00:13:35.002 real 0m2.826s 00:13:35.002 user 0m2.104s 00:13:35.002 sys 0m0.536s 00:13:35.002 09:19:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.002 09:19:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 ************************************ 00:13:35.002 END TEST bdev_write_zeroes 00:13:35.002 ************************************ 00:13:35.002 09:19:26 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.002 09:19:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:35.002 09:19:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.002 09:19:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.002 ************************************ 00:13:35.002 START TEST bdev_json_nonenclosed 00:13:35.002 ************************************ 00:13:35.002 09:19:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:35.263 [2024-10-08 09:19:26.752028] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:13:35.263 [2024-10-08 09:19:26.752367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70807 ] 00:13:35.263 [2024-10-08 09:19:26.906165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.524 [2024-10-08 09:19:27.155926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.524 [2024-10-08 09:19:27.156214] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:35.524 [2024-10-08 09:19:27.156334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:35.524 [2024-10-08 09:19:27.156361] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.098 00:13:36.098 real 0m0.795s 00:13:36.098 user 0m0.562s 00:13:36.098 sys 0m0.124s 00:13:36.098 ************************************ 00:13:36.098 END TEST bdev_json_nonenclosed 00:13:36.098 ************************************ 00:13:36.098 09:19:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.098 09:19:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 09:19:27 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.098 09:19:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:36.098 09:19:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.098 09:19:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 ************************************ 00:13:36.098 START TEST bdev_json_nonarray 00:13:36.098 ************************************ 00:13:36.098 09:19:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.098 [2024-10-08 09:19:27.607753] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:13:36.098 [2024-10-08 09:19:27.607903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:13:36.098 [2024-10-08 09:19:27.753991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.358 [2024-10-08 09:19:27.993868] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.358 [2024-10-08 09:19:27.993988] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
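
Both JSON negative tests hand bdevperf a deliberately malformed --json config and pass only if it fails cleanly with the errors logged here. The file shapes below are illustrative reconstructions from those error messages, not the real contents of nonenclosed.json/nonarray.json; the valid skeleton matches what save_config emits later in this log:

    # valid skeleton: a single object whose "subsystems" member is an array
    #   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
    # nonenclosed.json: top-level members not wrapped in {}
    #   "subsystems": [ ... ]
    #   -> *ERROR*: Invalid JSON configuration: not enclosed in {}.
    # nonarray.json: "subsystems" present but not an array
    #   { "subsystems": { "subsystem": "bdev" } }
    #   -> *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
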
00:13:36.358 [2024-10-08 09:19:27.994009] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.358 [2024-10-08 09:19:27.994019] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:36.927 00:13:36.927 real 0m0.778s 00:13:36.927 user 0m0.540s 00:13:36.927 sys 0m0.131s 00:13:36.927 ************************************ 00:13:36.927 END TEST bdev_json_nonarray 00:13:36.927 ************************************ 00:13:36.927 09:19:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.927 09:19:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:36.927 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:36.928 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:36.928 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:36.928 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:36.928 09:19:28 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:37.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:59.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:07.328 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:07.328 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:07.328 00:14:07.328 real 1m23.528s 00:14:07.328 user 1m29.988s 00:14:07.328 sys 1m36.223s 00:14:07.328 09:19:58 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.328 09:19:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.328 ************************************ 00:14:07.328 END TEST blockdev_xnvme 00:14:07.328 ************************************ 00:14:07.328 09:19:58 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:07.328 09:19:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:07.328 09:19:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.328 09:19:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.328 ************************************ 00:14:07.328 START TEST ublk 00:14:07.328 ************************************ 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:07.328 * Looking for test storage... 
00:14:07.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.328 09:19:58 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.328 09:19:58 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.328 09:19:58 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.328 09:19:58 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.328 09:19:58 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.328 09:19:58 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:07.328 09:19:58 ublk -- scripts/common.sh@345 -- # : 1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.328 09:19:58 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:07.328 09:19:58 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@353 -- # local d=1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.328 09:19:58 ublk -- scripts/common.sh@355 -- # echo 1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.328 09:19:58 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@353 -- # local d=2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.328 09:19:58 ublk -- scripts/common.sh@355 -- # echo 2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.328 09:19:58 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.328 09:19:58 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.328 09:19:58 ublk -- scripts/common.sh@368 -- # return 0 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.328 --rc genhtml_branch_coverage=1 00:14:07.328 --rc genhtml_function_coverage=1 00:14:07.328 --rc genhtml_legend=1 00:14:07.328 --rc geninfo_all_blocks=1 00:14:07.328 --rc geninfo_unexecuted_blocks=1 00:14:07.328 00:14:07.328 ' 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.328 --rc genhtml_branch_coverage=1 00:14:07.328 --rc genhtml_function_coverage=1 00:14:07.328 --rc genhtml_legend=1 00:14:07.328 --rc geninfo_all_blocks=1 00:14:07.328 --rc geninfo_unexecuted_blocks=1 00:14:07.328 00:14:07.328 ' 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.328 --rc genhtml_branch_coverage=1 00:14:07.328 --rc 
genhtml_function_coverage=1 00:14:07.328 --rc genhtml_legend=1 00:14:07.328 --rc geninfo_all_blocks=1 00:14:07.328 --rc geninfo_unexecuted_blocks=1 00:14:07.328 00:14:07.328 ' 00:14:07.328 09:19:58 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:07.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.328 --rc genhtml_branch_coverage=1 00:14:07.328 --rc genhtml_function_coverage=1 00:14:07.328 --rc genhtml_legend=1 00:14:07.328 --rc geninfo_all_blocks=1 00:14:07.328 --rc geninfo_unexecuted_blocks=1 00:14:07.328 00:14:07.328 ' 00:14:07.328 09:19:58 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:07.328 09:19:58 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:07.328 09:19:58 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:07.328 09:19:58 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:07.329 09:19:58 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:07.329 09:19:58 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:07.329 09:19:58 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:07.329 09:19:58 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:07.329 09:19:58 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:07.329 09:19:58 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:07.329 09:19:58 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:07.329 09:19:58 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.329 09:19:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.329 ************************************ 00:14:07.329 START TEST test_save_ublk_config 00:14:07.329 ************************************ 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71159 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71159 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71159 ']' 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:07.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.329 09:19:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.329 [2024-10-08 09:19:58.882014] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:07.329 [2024-10-08 09:19:58.882282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71159 ] 00:14:07.590 [2024-10-08 09:19:59.032707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.590 [2024-10-08 09:19:59.229093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.162 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.162 [2024-10-08 09:19:59.839411] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.162 [2024-10-08 09:19:59.840195] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.423 malloc0 00:14:08.423 [2024-10-08 09:19:59.895845] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:08.423 [2024-10-08 09:19:59.895921] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:08.423 [2024-10-08 09:19:59.895930] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:08.423 [2024-10-08 09:19:59.895939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.423 [2024-10-08 09:19:59.904487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.423 [2024-10-08 09:19:59.904509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.423 [2024-10-08 09:19:59.911414] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.423 [2024-10-08 09:19:59.911513] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:08.423 [2024-10-08 09:19:59.928418] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.423 0 00:14:08.423 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.423 09:19:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:08.423 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.423 09:19:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.685 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.685 09:20:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:08.685 "subsystems": [ 00:14:08.685 { 00:14:08.685 "subsystem": "fsdev", 00:14:08.685 
"config": [ 00:14:08.685 { 00:14:08.685 "method": "fsdev_set_opts", 00:14:08.685 "params": { 00:14:08.685 "fsdev_io_pool_size": 65535, 00:14:08.685 "fsdev_io_cache_size": 256 00:14:08.685 } 00:14:08.685 } 00:14:08.685 ] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "keyring", 00:14:08.685 "config": [] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "iobuf", 00:14:08.685 "config": [ 00:14:08.685 { 00:14:08.685 "method": "iobuf_set_options", 00:14:08.685 "params": { 00:14:08.685 "small_pool_count": 8192, 00:14:08.685 "large_pool_count": 1024, 00:14:08.685 "small_bufsize": 8192, 00:14:08.685 "large_bufsize": 135168 00:14:08.685 } 00:14:08.685 } 00:14:08.685 ] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "sock", 00:14:08.685 "config": [ 00:14:08.685 { 00:14:08.685 "method": "sock_set_default_impl", 00:14:08.685 "params": { 00:14:08.685 "impl_name": "posix" 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "sock_impl_set_options", 00:14:08.685 "params": { 00:14:08.685 "impl_name": "ssl", 00:14:08.685 "recv_buf_size": 4096, 00:14:08.685 "send_buf_size": 4096, 00:14:08.685 "enable_recv_pipe": true, 00:14:08.685 "enable_quickack": false, 00:14:08.685 "enable_placement_id": 0, 00:14:08.685 "enable_zerocopy_send_server": true, 00:14:08.685 "enable_zerocopy_send_client": false, 00:14:08.685 "zerocopy_threshold": 0, 00:14:08.685 "tls_version": 0, 00:14:08.685 "enable_ktls": false 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "sock_impl_set_options", 00:14:08.685 "params": { 00:14:08.685 "impl_name": "posix", 00:14:08.685 "recv_buf_size": 2097152, 00:14:08.685 "send_buf_size": 2097152, 00:14:08.685 "enable_recv_pipe": true, 00:14:08.685 "enable_quickack": false, 00:14:08.685 "enable_placement_id": 0, 00:14:08.685 "enable_zerocopy_send_server": true, 00:14:08.685 "enable_zerocopy_send_client": false, 00:14:08.685 "zerocopy_threshold": 0, 00:14:08.685 "tls_version": 0, 00:14:08.685 "enable_ktls": false 00:14:08.685 } 00:14:08.685 } 00:14:08.685 ] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "vmd", 00:14:08.685 "config": [] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "accel", 00:14:08.685 "config": [ 00:14:08.685 { 00:14:08.685 "method": "accel_set_options", 00:14:08.685 "params": { 00:14:08.685 "small_cache_size": 128, 00:14:08.685 "large_cache_size": 16, 00:14:08.685 "task_count": 2048, 00:14:08.685 "sequence_count": 2048, 00:14:08.685 "buf_count": 2048 00:14:08.685 } 00:14:08.685 } 00:14:08.685 ] 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "subsystem": "bdev", 00:14:08.685 "config": [ 00:14:08.685 { 00:14:08.685 "method": "bdev_set_options", 00:14:08.685 "params": { 00:14:08.685 "bdev_io_pool_size": 65535, 00:14:08.685 "bdev_io_cache_size": 256, 00:14:08.685 "bdev_auto_examine": true, 00:14:08.685 "iobuf_small_cache_size": 128, 00:14:08.685 "iobuf_large_cache_size": 16 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_raid_set_options", 00:14:08.685 "params": { 00:14:08.685 "process_window_size_kb": 1024, 00:14:08.685 "process_max_bandwidth_mb_sec": 0 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_iscsi_set_options", 00:14:08.685 "params": { 00:14:08.685 "timeout_sec": 30 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_nvme_set_options", 00:14:08.685 "params": { 00:14:08.685 "action_on_timeout": "none", 00:14:08.685 "timeout_us": 0, 00:14:08.685 "timeout_admin_us": 0, 00:14:08.685 "keep_alive_timeout_ms": 10000, 00:14:08.685 
"arbitration_burst": 0, 00:14:08.685 "low_priority_weight": 0, 00:14:08.685 "medium_priority_weight": 0, 00:14:08.685 "high_priority_weight": 0, 00:14:08.685 "nvme_adminq_poll_period_us": 10000, 00:14:08.685 "nvme_ioq_poll_period_us": 0, 00:14:08.685 "io_queue_requests": 0, 00:14:08.685 "delay_cmd_submit": true, 00:14:08.685 "transport_retry_count": 4, 00:14:08.685 "bdev_retry_count": 3, 00:14:08.685 "transport_ack_timeout": 0, 00:14:08.685 "ctrlr_loss_timeout_sec": 0, 00:14:08.685 "reconnect_delay_sec": 0, 00:14:08.685 "fast_io_fail_timeout_sec": 0, 00:14:08.685 "disable_auto_failback": false, 00:14:08.685 "generate_uuids": false, 00:14:08.685 "transport_tos": 0, 00:14:08.685 "nvme_error_stat": false, 00:14:08.685 "rdma_srq_size": 0, 00:14:08.685 "io_path_stat": false, 00:14:08.685 "allow_accel_sequence": false, 00:14:08.685 "rdma_max_cq_size": 0, 00:14:08.685 "rdma_cm_event_timeout_ms": 0, 00:14:08.685 "dhchap_digests": [ 00:14:08.685 "sha256", 00:14:08.685 "sha384", 00:14:08.685 "sha512" 00:14:08.685 ], 00:14:08.685 "dhchap_dhgroups": [ 00:14:08.685 "null", 00:14:08.685 "ffdhe2048", 00:14:08.685 "ffdhe3072", 00:14:08.685 "ffdhe4096", 00:14:08.685 "ffdhe6144", 00:14:08.685 "ffdhe8192" 00:14:08.685 ] 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_nvme_set_hotplug", 00:14:08.685 "params": { 00:14:08.685 "period_us": 100000, 00:14:08.685 "enable": false 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_malloc_create", 00:14:08.685 "params": { 00:14:08.685 "name": "malloc0", 00:14:08.685 "num_blocks": 8192, 00:14:08.685 "block_size": 4096, 00:14:08.685 "physical_block_size": 4096, 00:14:08.685 "uuid": "d9f36147-8702-44df-a265-b92ed9ddce13", 00:14:08.685 "optimal_io_boundary": 0, 00:14:08.685 "md_size": 0, 00:14:08.685 "dif_type": 0, 00:14:08.685 "dif_is_head_of_md": false, 00:14:08.685 "dif_pi_format": 0 00:14:08.685 } 00:14:08.685 }, 00:14:08.685 { 00:14:08.685 "method": "bdev_wait_for_examine" 00:14:08.685 } 00:14:08.685 ] 00:14:08.685 }, 00:14:08.685 { 00:14:08.686 "subsystem": "scsi", 00:14:08.686 "config": null 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "scheduler", 00:14:08.686 "config": [ 00:14:08.686 { 00:14:08.686 "method": "framework_set_scheduler", 00:14:08.686 "params": { 00:14:08.686 "name": "static" 00:14:08.686 } 00:14:08.686 } 00:14:08.686 ] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "vhost_scsi", 00:14:08.686 "config": [] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "vhost_blk", 00:14:08.686 "config": [] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "ublk", 00:14:08.686 "config": [ 00:14:08.686 { 00:14:08.686 "method": "ublk_create_target", 00:14:08.686 "params": { 00:14:08.686 "cpumask": "1" 00:14:08.686 } 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "method": "ublk_start_disk", 00:14:08.686 "params": { 00:14:08.686 "bdev_name": "malloc0", 00:14:08.686 "ublk_id": 0, 00:14:08.686 "num_queues": 1, 00:14:08.686 "queue_depth": 128 00:14:08.686 } 00:14:08.686 } 00:14:08.686 ] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "nbd", 00:14:08.686 "config": [] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "nvmf", 00:14:08.686 "config": [ 00:14:08.686 { 00:14:08.686 "method": "nvmf_set_config", 00:14:08.686 "params": { 00:14:08.686 "discovery_filter": "match_any", 00:14:08.686 "admin_cmd_passthru": { 00:14:08.686 "identify_ctrlr": false 00:14:08.686 }, 00:14:08.686 "dhchap_digests": [ 00:14:08.686 "sha256", 00:14:08.686 "sha384", 00:14:08.686 "sha512" 00:14:08.686 
], 00:14:08.686 "dhchap_dhgroups": [ 00:14:08.686 "null", 00:14:08.686 "ffdhe2048", 00:14:08.686 "ffdhe3072", 00:14:08.686 "ffdhe4096", 00:14:08.686 "ffdhe6144", 00:14:08.686 "ffdhe8192" 00:14:08.686 ] 00:14:08.686 } 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "method": "nvmf_set_max_subsystems", 00:14:08.686 "params": { 00:14:08.686 "max_subsystems": 1024 00:14:08.686 } 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "method": "nvmf_set_crdt", 00:14:08.686 "params": { 00:14:08.686 "crdt1": 0, 00:14:08.686 "crdt2": 0, 00:14:08.686 "crdt3": 0 00:14:08.686 } 00:14:08.686 } 00:14:08.686 ] 00:14:08.686 }, 00:14:08.686 { 00:14:08.686 "subsystem": "iscsi", 00:14:08.686 "config": [ 00:14:08.686 { 00:14:08.686 "method": "iscsi_set_options", 00:14:08.686 "params": { 00:14:08.686 "node_base": "iqn.2016-06.io.spdk", 00:14:08.686 "max_sessions": 128, 00:14:08.686 "max_connections_per_session": 2, 00:14:08.686 "max_queue_depth": 64, 00:14:08.686 "default_time2wait": 2, 00:14:08.686 "default_time2retain": 20, 00:14:08.686 "first_burst_length": 8192, 00:14:08.686 "immediate_data": true, 00:14:08.686 "allow_duplicated_isid": false, 00:14:08.686 "error_recovery_level": 0, 00:14:08.686 "nop_timeout": 60, 00:14:08.686 "nop_in_interval": 30, 00:14:08.686 "disable_chap": false, 00:14:08.686 "require_chap": false, 00:14:08.686 "mutual_chap": false, 00:14:08.686 "chap_group": 0, 00:14:08.686 "max_large_datain_per_connection": 64, 00:14:08.686 "max_r2t_per_connection": 4, 00:14:08.686 "pdu_pool_size": 36864, 00:14:08.686 "immediate_data_pool_size": 16384, 00:14:08.686 "data_out_pool_size": 2048 00:14:08.686 } 00:14:08.686 } 00:14:08.686 ] 00:14:08.686 } 00:14:08.686 ] 00:14:08.686 }' 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71159 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71159 ']' 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71159 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71159 00:14:08.686 killing process with pid 71159 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71159' 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71159 00:14:08.686 09:20:00 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71159 00:14:09.629 [2024-10-08 09:20:01.275587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.629 [2024-10-08 09:20:01.307437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.629 [2024-10-08 09:20:01.307583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:10.015 [2024-10-08 09:20:01.319410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:10.015 [2024-10-08 09:20:01.319463] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:10.015 [2024-10-08 09:20:01.319473] 
ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:10.015 [2024-10-08 09:20:01.319508] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:10.015 [2024-10-08 09:20:01.319653] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71214 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71214 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71214 ']' 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.428 09:20:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:11.428 "subsystems": [ 00:14:11.428 { 00:14:11.428 "subsystem": "fsdev", 00:14:11.428 "config": [ 00:14:11.428 { 00:14:11.428 "method": "fsdev_set_opts", 00:14:11.428 "params": { 00:14:11.428 "fsdev_io_pool_size": 65535, 00:14:11.428 "fsdev_io_cache_size": 256 00:14:11.428 } 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "keyring", 00:14:11.428 "config": [] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "iobuf", 00:14:11.428 "config": [ 00:14:11.428 { 00:14:11.428 "method": "iobuf_set_options", 00:14:11.428 "params": { 00:14:11.428 "small_pool_count": 8192, 00:14:11.428 "large_pool_count": 1024, 00:14:11.428 "small_bufsize": 8192, 00:14:11.428 "large_bufsize": 135168 00:14:11.428 } 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "sock", 00:14:11.428 "config": [ 00:14:11.428 { 00:14:11.428 "method": "sock_set_default_impl", 00:14:11.428 "params": { 00:14:11.428 "impl_name": "posix" 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "sock_impl_set_options", 00:14:11.428 "params": { 00:14:11.428 "impl_name": "ssl", 00:14:11.428 "recv_buf_size": 4096, 00:14:11.428 "send_buf_size": 4096, 00:14:11.428 "enable_recv_pipe": true, 00:14:11.428 "enable_quickack": false, 00:14:11.428 "enable_placement_id": 0, 00:14:11.428 "enable_zerocopy_send_server": true, 00:14:11.428 "enable_zerocopy_send_client": false, 00:14:11.428 "zerocopy_threshold": 0, 00:14:11.428 "tls_version": 0, 00:14:11.428 "enable_ktls": false 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "sock_impl_set_options", 00:14:11.428 "params": { 00:14:11.428 "impl_name": "posix", 00:14:11.428 "recv_buf_size": 2097152, 00:14:11.428 "send_buf_size": 2097152, 00:14:11.428 "enable_recv_pipe": true, 00:14:11.428 "enable_quickack": false, 00:14:11.428 "enable_placement_id": 0, 00:14:11.428 "enable_zerocopy_send_server": true, 00:14:11.428 "enable_zerocopy_send_client": false, 00:14:11.428 "zerocopy_threshold": 0, 00:14:11.428 "tls_version": 0, 00:14:11.428 "enable_ktls": false 00:14:11.428 } 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "vmd", 00:14:11.428 "config": [] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "accel", 00:14:11.428 "config": [ 00:14:11.428 { 00:14:11.428 "method": "accel_set_options", 00:14:11.428 "params": { 00:14:11.428 "small_cache_size": 128, 00:14:11.428 "large_cache_size": 16, 00:14:11.428 "task_count": 2048, 00:14:11.428 "sequence_count": 2048, 00:14:11.428 "buf_count": 2048 00:14:11.428 } 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "bdev", 00:14:11.428 
"config": [ 00:14:11.428 { 00:14:11.428 "method": "bdev_set_options", 00:14:11.428 "params": { 00:14:11.428 "bdev_io_pool_size": 65535, 00:14:11.428 "bdev_io_cache_size": 256, 00:14:11.428 "bdev_auto_examine": true, 00:14:11.428 "iobuf_small_cache_size": 128, 00:14:11.428 "iobuf_large_cache_size": 16 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_raid_set_options", 00:14:11.428 "params": { 00:14:11.428 "process_window_size_kb": 1024, 00:14:11.428 "process_max_bandwidth_mb_sec": 0 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_iscsi_set_options", 00:14:11.428 "params": { 00:14:11.428 "timeout_sec": 30 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_nvme_set_options", 00:14:11.428 "params": { 00:14:11.428 "action_on_timeout": "none", 00:14:11.428 "timeout_us": 0, 00:14:11.428 "timeout_admin_us": 0, 00:14:11.428 "keep_alive_timeout_ms": 10000, 00:14:11.428 "arbitration_burst": 0, 00:14:11.428 "low_priority_weight": 0, 00:14:11.428 "medium_priority_weight": 0, 00:14:11.428 "high_priority_weight": 0, 00:14:11.428 "nvme_adminq_poll_period_us": 10000, 00:14:11.428 "nvme_ioq_poll_period_us": 0, 00:14:11.428 "io_queue_requests": 0, 00:14:11.428 "delay_cmd_submit": true, 00:14:11.428 "transport_retry_count": 4, 00:14:11.428 "bdev_retry_count": 3, 00:14:11.428 "transport_ack_timeout": 0, 00:14:11.428 "ctrlr_loss_timeout_sec": 0, 00:14:11.428 "reconnect_delay_sec": 0, 00:14:11.428 "fast_io_fail_timeout_sec": 0, 00:14:11.428 "disable_auto_failback": false, 00:14:11.428 "generate_uuids": false, 00:14:11.428 "transport_tos": 0, 00:14:11.428 "nvme_error_stat": false, 00:14:11.428 "rdma_srq_size": 0, 00:14:11.428 "io_path_stat": false, 00:14:11.428 "allow_accel_sequence": false, 00:14:11.428 "rdma_max_cq_size": 0, 00:14:11.428 "rdma_cm_event_timeout_ms": 0, 00:14:11.428 "dhchap_digests": [ 00:14:11.428 "sha256", 00:14:11.428 "sha384", 00:14:11.428 "sha512" 00:14:11.428 ], 00:14:11.428 "dhchap_dhgroups": [ 00:14:11.428 "null", 00:14:11.428 "ffdhe2048", 00:14:11.428 "ffdhe3072", 00:14:11.428 "ffdhe4096", 00:14:11.428 "ffdhe6144", 00:14:11.428 "ffdhe8192" 00:14:11.428 ] 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_nvme_set_hotplug", 00:14:11.428 "params": { 00:14:11.428 "period_us": 100000, 00:14:11.428 "enable": false 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_malloc_create", 00:14:11.428 "params": { 00:14:11.428 "name": "malloc0", 00:14:11.428 "num_blocks": 8192, 00:14:11.428 "block_size": 4096, 00:14:11.428 "physical_block_size": 4096, 00:14:11.428 "uuid": "d9f36147-8702-44df-a265-b92ed9ddce13", 00:14:11.428 "optimal_io_boundary": 0, 00:14:11.428 "md_size": 0, 00:14:11.428 "dif_type": 0, 00:14:11.428 "dif_is_head_of_md": false, 00:14:11.428 "dif_pi_format": 0 00:14:11.428 } 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "method": "bdev_wait_for_examine" 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "scsi", 00:14:11.428 "config": null 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "scheduler", 00:14:11.428 "config": [ 00:14:11.428 { 00:14:11.428 "method": "framework_set_scheduler", 00:14:11.428 "params": { 00:14:11.428 "name": "static" 00:14:11.428 } 00:14:11.428 } 00:14:11.428 ] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "vhost_scsi", 00:14:11.428 "config": [] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 "subsystem": "vhost_blk", 00:14:11.428 "config": [] 00:14:11.428 }, 00:14:11.428 { 00:14:11.428 
"subsystem": "ublk", 00:14:11.429 "config": [ 00:14:11.429 { 00:14:11.429 "method": "ublk_create_target", 00:14:11.429 "params": { 00:14:11.429 "cpumask": "1" 00:14:11.429 } 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "method": "ublk_start_disk", 00:14:11.429 "params": { 00:14:11.429 "bdev_name": "malloc0", 00:14:11.429 "ublk_id": 0, 00:14:11.429 "num_queues": 1, 00:14:11.429 "queue_depth": 128 00:14:11.429 } 00:14:11.429 } 00:14:11.429 ] 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "subsystem": "nbd", 00:14:11.429 "config": [] 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "subsystem": "nvmf", 00:14:11.429 "config": [ 00:14:11.429 { 00:14:11.429 "method": "nvmf_set_config", 00:14:11.429 "params": { 00:14:11.429 "discovery_filter": "match_any", 00:14:11.429 "admin_cmd_passthru": { 00:14:11.429 "identify_ctrlr": false 00:14:11.429 }, 00:14:11.429 "dhchap_digests": [ 00:14:11.429 "sha256", 00:14:11.429 "sha384", 00:14:11.429 "sha512" 00:14:11.429 ], 00:14:11.429 "dhchap_dhgroups": [ 00:14:11.429 "null", 00:14:11.429 "ffdhe2048", 00:14:11.429 "ffdhe3072", 00:14:11.429 "ffdhe4096", 00:14:11.429 "ffdhe6144", 00:14:11.429 "ffdhe8192" 00:14:11.429 ] 00:14:11.429 } 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "method": "nvmf_set_max_subsystems", 00:14:11.429 "params": { 00:14:11.429 "max_subsystems": 1024 00:14:11.429 } 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "method": "nvmf_set_crdt", 00:14:11.429 "params": { 00:14:11.429 "crdt1": 0, 00:14:11.429 "crdt2": 0, 00:14:11.429 "crdt3": 0 00:14:11.429 } 00:14:11.429 } 00:14:11.429 ] 00:14:11.429 }, 00:14:11.429 { 00:14:11.429 "subsystem": "iscsi", 00:14:11.429 "config": [ 00:14:11.429 { 00:14:11.429 "method": "iscsi_set_options", 00:14:11.429 "params": { 00:14:11.429 "node_base": "iqn.2016-06.io.spdk", 00:14:11.429 "max_sessions": 128, 00:14:11.429 "max_connections_per_session": 2, 00:14:11.429 "max_queue_depth": 64, 00:14:11.429 "default_time2wait": 2, 00:14:11.429 "default_time2retain": 20, 00:14:11.429 "first_burst_length": 8192, 00:14:11.429 "immediate_data": true, 00:14:11.429 "allow_duplicated_isid": false, 00:14:11.429 "error_recovery_level": 0, 00:14:11.429 "nop_timeout": 60, 00:14:11.429 "nop_in_interval": 30, 00:14:11.429 "disable_chap": false, 00:14:11.429 "require_chap": false, 00:14:11.429 "mutual_chap": false, 00:14:11.429 "chap_group": 0, 00:14:11.429 "max_large_datain_per_connection": 64, 00:14:11.429 "max_r2t_per_connection": 4, 00:14:11.429 "pdu_pool_size": 36864, 00:14:11.429 "immediate_data_pool_size": 16384, 00:14:11.429 "data_out_pool_size": 2048 00:14:11.429 } 00:14:11.429 } 00:14:11.429 ] 00:14:11.429 } 00:14:11.429 ] 00:14:11.429 }' 00:14:11.429 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.429 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.429 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.429 09:20:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.429 [2024-10-08 09:20:02.767086] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:14:11.429 [2024-10-08 09:20:02.767542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71214 ] 00:14:11.429 [2024-10-08 09:20:02.917446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.429 [2024-10-08 09:20:03.077634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.371 [2024-10-08 09:20:03.706403] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:12.371 [2024-10-08 09:20:03.707032] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:12.371 [2024-10-08 09:20:03.714496] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:12.371 [2024-10-08 09:20:03.714555] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:12.371 [2024-10-08 09:20:03.714561] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:12.371 [2024-10-08 09:20:03.714566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:12.371 [2024-10-08 09:20:03.723455] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:12.371 [2024-10-08 09:20:03.723471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:12.371 [2024-10-08 09:20:03.730410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:12.371 [2024-10-08 09:20:03.730481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:12.371 [2024-10-08 09:20:03.747411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71214 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71214 ']' 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71214 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71214 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71214' 00:14:12.371 killing process with pid 71214 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71214 00:14:12.371 09:20:03 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71214 00:14:13.314 [2024-10-08 09:20:04.840716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:13.314 [2024-10-08 09:20:04.870414] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:13.314 [2024-10-08 09:20:04.870516] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:13.314 [2024-10-08 09:20:04.882399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:13.314 [2024-10-08 09:20:04.882439] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:13.314 [2024-10-08 09:20:04.882445] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:13.314 [2024-10-08 09:20:04.882476] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.314 [2024-10-08 09:20:04.882582] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.706 09:20:06 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:14.706 ************************************ 00:14:14.706 END TEST test_save_ublk_config 00:14:14.706 ************************************ 00:14:14.706 00:14:14.706 real 0m7.551s 00:14:14.706 user 0m5.201s 00:14:14.706 sys 0m2.966s 00:14:14.706 09:20:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.706 09:20:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.964 09:20:06 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71290 00:14:14.964 09:20:06 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.964 09:20:06 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.964 09:20:06 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71290 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@831 -- # '[' -z 71290 ']' 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.964 09:20:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.964 [2024-10-08 09:20:06.467172] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:14:14.964 [2024-10-08 09:20:06.467307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71290 ] 00:14:14.964 [2024-10-08 09:20:06.609312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.221 [2024-10-08 09:20:06.791416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.221 [2024-10-08 09:20:06.791476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.787 09:20:07 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.787 09:20:07 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:15.787 09:20:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:15.787 09:20:07 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.787 09:20:07 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.787 09:20:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.787 ************************************ 00:14:15.787 START TEST test_create_ublk 00:14:15.787 ************************************ 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:15.787 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.787 [2024-10-08 09:20:07.397417] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.787 [2024-10-08 09:20:07.398943] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.787 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:15.787 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.787 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.045 [2024-10-08 09:20:07.597550] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:16.045 [2024-10-08 09:20:07.597921] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:16.045 [2024-10-08 09:20:07.597935] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.045 [2024-10-08 09:20:07.597942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.045 [2024-10-08 09:20:07.606598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.045 [2024-10-08 09:20:07.606621] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.045 
[2024-10-08 09:20:07.613438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.045 [2024-10-08 09:20:07.614055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.045 [2024-10-08 09:20:07.631428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.045 09:20:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:16.045 { 00:14:16.045 "ublk_device": "/dev/ublkb0", 00:14:16.045 "id": 0, 00:14:16.045 "queue_depth": 512, 00:14:16.045 "num_queues": 4, 00:14:16.045 "bdev_name": "Malloc0" 00:14:16.045 } 00:14:16.045 ]' 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:16.045 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.304 09:20:07 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
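
run_fio_test assembles the template traced above into a single fio command against the freshly created ublk device; the expansion below copies the flags exactly as logged (134217728 bytes = the 128 MiB malloc bdev backing /dev/ublkb0). Note fio's warning just below: with --time_based the write phase consumes the full 10 s runtime, so the --do_verify read-back phase never runs in this stage.

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0
    # writes the 0xcc pattern for 10 seconds; the verify read phase is skipped
    # (see the "verification read phase will never start" notice in the output)
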
00:14:16.304 09:20:07 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:14:16.304 fio: verification read phase will never start because write phase uses all of runtime
00:14:16.304 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:14:16.304 fio-3.35
00:14:16.304 Starting 1 process
00:14:28.518
00:14:28.518 fio_test: (groupid=0, jobs=1): err= 0: pid=71335: Tue Oct 8 09:20:18 2024
00:14:28.518 write: IOPS=15.4k, BW=60.1MiB/s (63.0MB/s)(601MiB/10001msec); 0 zone resets
00:14:28.518 clat (usec): min=35, max=11776, avg=64.28, stdev=142.10
00:14:28.518 lat (usec): min=36, max=11794, avg=64.69, stdev=142.12
00:14:28.518 clat percentiles (usec):
00:14:28.518 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50],
00:14:28.518 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61],
00:14:28.518 | 70.00th=[ 63], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 73],
00:14:28.518 | 99.00th=[ 82], 99.50th=[ 93], 99.90th=[ 3195], 99.95th=[ 3556],
00:14:28.518 | 99.99th=[ 3851]
00:14:28.518 bw ( KiB/s): min=27032, max=80360, per=99.68%, avg=61341.05, stdev=13097.68, samples=19
00:14:28.518 iops : min= 6758, max=20090, avg=15335.26, stdev=3274.42, samples=19
00:14:28.518 lat (usec) : 50=21.72%, 100=77.84%, 250=0.16%, 500=0.02%, 750=0.01%
00:14:28.518 lat (usec) : 1000=0.02%
00:14:28.518 lat (msec) : 2=0.07%, 4=0.17%, 10=0.01%, 20=0.01%
00:14:28.518 cpu : usr=2.43%, sys=13.05%, ctx=153859, majf=0, minf=797
00:14:28.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:28.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:28.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:28.518 issued rwts: total=0,153860,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:28.518 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:28.518
00:14:28.518 Run status group 0 (all jobs):
00:14:28.518 WRITE: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=601MiB (630MB), run=10001-10001msec
00:14:28.518
00:14:28.518 Disk stats (read/write):
00:14:28.518 ublkb0: ios=0/152129, merge=0/0, ticks=0/8333, in_queue=8334, util=99.06%
00:14:28.519 09:20:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:14:28.519 [2024-10-08 09:20:18.047325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:14:28.519 [2024-10-08 09:20:18.094444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:14:28.519 [2024-10-08 09:20:18.095027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:14:28.519 [2024-10-08 09:20:18.102417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:14:28.519 [2024-10-08 09:20:18.102659] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:14:28.519 [2024-10-08 09:20:18.102669] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:28.519 09:20:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:18.118470] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:28.519 request: 00:14:28.519 { 00:14:28.519 "ublk_id": 0, 00:14:28.519 "method": "ublk_stop_disk", 00:14:28.519 "req_id": 1 00:14:28.519 } 00:14:28.519 Got JSON-RPC error response 00:14:28.519 response: 00:14:28.519 { 00:14:28.519 "code": -19, 00:14:28.519 "message": "No such device" 00:14:28.519 } 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.519 09:20:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:18.142472] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:28.519 [2024-10-08 09:20:18.144363] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:28.519 [2024-10-08 09:20:18.144400] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:28.519 09:20:18 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:28.519 09:20:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:28.519 00:14:28.519 real 0m11.235s 00:14:28.519 user 0m0.549s 00:14:28.519 sys 0m1.372s 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 ************************************ 00:14:28.519 END TEST test_create_ublk 00:14:28.519 ************************************ 00:14:28.519 09:20:18 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:28.519 09:20:18 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:28.519 09:20:18 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.519 09:20:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 ************************************ 00:14:28.519 START TEST test_create_multi_ublk 00:14:28.519 ************************************ 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:18.674404] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:28.519 [2024-10-08 09:20:18.675709] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:18.914532] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:28.519 [2024-10-08 09:20:18.914864] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:28.519 [2024-10-08 09:20:18.914876] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:28.519 [2024-10-08 09:20:18.914886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.519 [2024-10-08 09:20:18.938416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.519 [2024-10-08 09:20:18.938441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.519 [2024-10-08 09:20:18.950410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.519 [2024-10-08 09:20:18.950954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:28.519 [2024-10-08 09:20:18.986413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:28.519 09:20:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:19.250523] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:28.519 [2024-10-08 09:20:19.250838] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:28.519 [2024-10-08 09:20:19.250852] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:28.519 [2024-10-08 09:20:19.250865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.519 [2024-10-08 09:20:19.262428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.519 [2024-10-08 09:20:19.262446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.519 [2024-10-08 09:20:19.274410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.519 [2024-10-08 09:20:19.274939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:28.519 [2024-10-08 09:20:19.314412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.519 
09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.519 [2024-10-08 09:20:19.566781] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:28.519 [2024-10-08 09:20:19.567095] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:28.519 [2024-10-08 09:20:19.567107] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:28.519 [2024-10-08 09:20:19.567114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.519 [2024-10-08 09:20:19.578423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.519 [2024-10-08 09:20:19.578445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.519 [2024-10-08 09:20:19.590407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.519 [2024-10-08 09:20:19.590951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:28.519 [2024-10-08 09:20:19.599426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.519 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.520 [2024-10-08 09:20:19.774514] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:28.520 [2024-10-08 09:20:19.774828] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:28.520 [2024-10-08 09:20:19.774842] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:28.520 [2024-10-08 09:20:19.774848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.520 
[2024-10-08 09:20:19.782431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.520 [2024-10-08 09:20:19.782449] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.520 [2024-10-08 09:20:19.790424] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.520 [2024-10-08 09:20:19.790958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:28.520 [2024-10-08 09:20:19.794158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:28.520 { 00:14:28.520 "ublk_device": "/dev/ublkb0", 00:14:28.520 "id": 0, 00:14:28.520 "queue_depth": 512, 00:14:28.520 "num_queues": 4, 00:14:28.520 "bdev_name": "Malloc0" 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "ublk_device": "/dev/ublkb1", 00:14:28.520 "id": 1, 00:14:28.520 "queue_depth": 512, 00:14:28.520 "num_queues": 4, 00:14:28.520 "bdev_name": "Malloc1" 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "ublk_device": "/dev/ublkb2", 00:14:28.520 "id": 2, 00:14:28.520 "queue_depth": 512, 00:14:28.520 "num_queues": 4, 00:14:28.520 "bdev_name": "Malloc2" 00:14:28.520 }, 00:14:28.520 { 00:14:28.520 "ublk_device": "/dev/ublkb3", 00:14:28.520 "id": 3, 00:14:28.520 "queue_depth": 512, 00:14:28.520 "num_queues": 4, 00:14:28.520 "bdev_name": "Malloc3" 00:14:28.520 } 00:14:28.520 ]' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:28.520 09:20:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:28.520 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.778 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.778 [2024-10-08 09:20:20.451486] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.037 [2024-10-08 09:20:20.495436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.037 [2024-10-08 09:20:20.496111] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.037 [2024-10-08 09:20:20.499549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.037 [2024-10-08 09:20:20.499793] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:29.037 [2024-10-08 09:20:20.499807] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.037 [2024-10-08 09:20:20.518469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.037 [2024-10-08 09:20:20.551858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.037 [2024-10-08 09:20:20.552793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.037 [2024-10-08 09:20:20.558413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.037 [2024-10-08 09:20:20.558630] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:29.037 [2024-10-08 09:20:20.558643] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.037 [2024-10-08 09:20:20.572493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.037 [2024-10-08 09:20:20.609946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.037 [2024-10-08 09:20:20.610785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.037 [2024-10-08 09:20:20.619456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.037 [2024-10-08 09:20:20.619679] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:29.037 [2024-10-08 09:20:20.619691] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.037 [2024-10-08 09:20:20.634469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.037 [2024-10-08 09:20:20.678404] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.037 [2024-10-08 09:20:20.679006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.037 [2024-10-08 09:20:20.682711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.037 [2024-10-08 09:20:20.682925] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:29.037 [2024-10-08 09:20:20.682938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.037 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:29.295 [2024-10-08 09:20:20.877462] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.295 [2024-10-08 09:20:20.879289] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:29.295 [2024-10-08 09:20:20.879316] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:29.295 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:29.295 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.295 09:20:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:29.295 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.295 09:20:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.862 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.862 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:29.862 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.862 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.120 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.120 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.120 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:30.120 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.120 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.379 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.379 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.379 09:20:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:30.379 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.379 09:20:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:30.379 09:20:22 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:30.379 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:30.637 00:14:30.637 real 0m3.470s 00:14:30.637 user 0m0.815s 00:14:30.637 sys 0m0.142s 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.637 09:20:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.637 ************************************ 00:14:30.637 END TEST test_create_multi_ublk 00:14:30.637 ************************************ 00:14:30.637 09:20:22 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:30.637 09:20:22 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:30.637 09:20:22 ublk -- ublk/ublk.sh@130 -- # killprocess 71290 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@950 -- # '[' -z 71290 ']' 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@954 -- # kill -0 71290 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@955 -- # uname 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71290 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:30.637 killing process with pid 71290 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71290' 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@969 -- # kill 71290 00:14:30.637 09:20:22 ublk -- common/autotest_common.sh@974 -- # wait 71290 00:14:31.203 [2024-10-08 09:20:22.766554] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.203 [2024-10-08 09:20:22.766617] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:32.140 00:14:32.140 real 0m24.874s 00:14:32.140 user 0m35.313s 00:14:32.140 sys 0m9.715s 00:14:32.140 09:20:23 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.140 ************************************ 00:14:32.140 END TEST ublk 00:14:32.140 ************************************ 00:14:32.140 09:20:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.140 09:20:23 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.140 
09:20:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:32.140 09:20:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.140 09:20:23 -- common/autotest_common.sh@10 -- # set +x 00:14:32.140 ************************************ 00:14:32.140 START TEST ublk_recovery 00:14:32.140 ************************************ 00:14:32.140 09:20:23 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.140 * Looking for test storage... 00:14:32.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:32.140 09:20:23 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:32.140 09:20:23 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:32.140 09:20:23 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:32.140 09:20:23 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:32.140 09:20:23 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.140 09:20:23 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.140 09:20:23 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.141 09:20:23 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.141 --rc genhtml_branch_coverage=1 00:14:32.141 --rc genhtml_function_coverage=1 00:14:32.141 --rc genhtml_legend=1 00:14:32.141 --rc geninfo_all_blocks=1 00:14:32.141 --rc geninfo_unexecuted_blocks=1 00:14:32.141 00:14:32.141 ' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.141 --rc genhtml_branch_coverage=1 00:14:32.141 --rc genhtml_function_coverage=1 00:14:32.141 --rc genhtml_legend=1 00:14:32.141 --rc geninfo_all_blocks=1 00:14:32.141 --rc geninfo_unexecuted_blocks=1 00:14:32.141 00:14:32.141 ' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.141 --rc genhtml_branch_coverage=1 00:14:32.141 --rc genhtml_function_coverage=1 00:14:32.141 --rc genhtml_legend=1 00:14:32.141 --rc geninfo_all_blocks=1 00:14:32.141 --rc geninfo_unexecuted_blocks=1 00:14:32.141 00:14:32.141 ' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.141 --rc genhtml_branch_coverage=1 00:14:32.141 --rc genhtml_function_coverage=1 00:14:32.141 --rc genhtml_legend=1 00:14:32.141 --rc geninfo_all_blocks=1 00:14:32.141 --rc geninfo_unexecuted_blocks=1 00:14:32.141 00:14:32.141 ' 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:32.141 09:20:23 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:32.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71687 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71687 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71687 ']' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.141 09:20:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.141 09:20:23 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:32.141 [2024-10-08 09:20:23.814253] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:14:32.141 [2024-10-08 09:20:23.814431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71687 ] 00:14:32.402 [2024-10-08 09:20:23.969039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.661 [2024-10-08 09:20:24.202861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.661 [2024-10-08 09:20:24.202935] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:33.228 09:20:24 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.228 [2024-10-08 09:20:24.840413] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:33.228 [2024-10-08 09:20:24.841989] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.228 09:20:24 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.228 09:20:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.486 malloc0 00:14:33.486 09:20:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.486 09:20:24 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:33.486 09:20:24 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.486 09:20:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.486 [2024-10-08 09:20:24.952542] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:33.486 [2024-10-08 09:20:24.952645] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:33.486 [2024-10-08 09:20:24.952657] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:33.486 [2024-10-08 09:20:24.952665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:33.486 [2024-10-08 09:20:24.961515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:33.486 [2024-10-08 09:20:24.961535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:33.486 [2024-10-08 09:20:24.968415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:33.486 [2024-10-08 09:20:24.968561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:33.486 [2024-10-08 09:20:24.981413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:33.486 1 00:14:33.486 09:20:24 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.486 09:20:24 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:34.420 09:20:25 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71724 00:14:34.420 09:20:25 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:34.420 09:20:25 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:34.420 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:34.420 fio-3.35 00:14:34.420 Starting 1 process 00:14:39.692 09:20:30 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71687 00:14:39.692 09:20:30 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:44.977 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71687 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:44.977 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71834 00:14:44.977 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.977 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71834 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71834 ']' 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.977 09:20:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.977 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:44.977 [2024-10-08 09:20:36.086417] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
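This is the crux of the recovery test: a 60 s randrw fio job is left running against /dev/ublkb1 while the first target (pid 71687) is killed with SIGKILL, and a replacement spdk_tgt is started in its place. The flow the script drives looks roughly like the sketch below (a hedged reconstruction from the trace; $spdk_pid is a placeholder, and the real script uses waitforlisten to poll the RPC socket rather than a fixed sleep):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096        # 64 MiB backing bdev
  $RPC ublk_start_disk malloc0 1 -q 2 -d 128        # /dev/ublkb1, 2 queues, depth 128
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"                               # hard-kill the target mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &         # bring up a fresh target
  sleep 5                                           # stand-in for waitforlisten
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096        # recreate the same backing bdev
  $RPC ublk_recover_disk malloc0 1                  # re-adopt the still-live /dev/ublkb1
  wait                                              # fio should finish its full 60 s

The UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY control commands traced below, followed by the fio job completing with err= 0, are what demonstrate that the kernel-side device survived the target restart.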
00:14:44.977 [2024-10-08 09:20:36.086539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71834 ] 00:14:44.977 [2024-10-08 09:20:36.235341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:44.977 [2024-10-08 09:20:36.405406] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.977 [2024-10-08 09:20:36.405450] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:45.544 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.544 [2024-10-08 09:20:36.951409] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:45.544 [2024-10-08 09:20:36.952715] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.544 09:20:36 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.544 09:20:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.544 malloc0 00:14:45.544 09:20:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.544 09:20:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:45.544 09:20:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.544 09:20:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.544 [2024-10-08 09:20:37.047533] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:45.544 [2024-10-08 09:20:37.047569] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:45.544 [2024-10-08 09:20:37.047578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:45.544 [2024-10-08 09:20:37.055441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:45.544 [2024-10-08 09:20:37.055460] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:45.544 [2024-10-08 09:20:37.055467] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:45.544 [2024-10-08 09:20:37.055537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:45.544 1 00:14:45.544 09:20:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.544 09:20:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71724 00:14:45.544 [2024-10-08 09:20:37.063421] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:45.544 [2024-10-08 09:20:37.069767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:45.544 [2024-10-08 09:20:37.077587] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:45.544 [2024-10-08 
09:20:37.077606] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:15:41.764
00:15:41.764 fio_test: (groupid=0, jobs=1): err= 0: pid=71733: Tue Oct 8 09:21:26 2024
00:15:41.764 read: IOPS=25.8k, BW=101MiB/s (106MB/s)(6040MiB/60002msec)
00:15:41.764 slat (nsec): min=926, max=253659, avg=5347.50, stdev=1537.49
00:15:41.764 clat (usec): min=573, max=6090.6k, avg=2445.38, stdev=39770.51
00:15:41.764 lat (usec): min=578, max=6090.6k, avg=2450.73, stdev=39770.50
00:15:41.764 clat percentiles (usec):
00:15:41.764 | 1.00th=[ 1811], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975],
00:15:41.764 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2057], 60.00th=[ 2089],
00:15:41.764 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3064],
00:15:41.764 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 6915], 99.95th=[ 8029],
00:15:41.764 | 99.99th=[13042]
00:15:41.764 bw ( KiB/s): min=12496, max=122824, per=100.00%, avg=113601.30, stdev=13859.27, samples=108
00:15:41.764 iops : min= 3124, max=30706, avg=28400.31, stdev=3464.82, samples=108
00:15:41.764 write: IOPS=25.7k, BW=101MiB/s (105MB/s)(6032MiB/60002msec); 0 zone resets
00:15:41.764 slat (nsec): min=949, max=173752, avg=5476.32, stdev=1525.20
00:15:41.764 clat (usec): min=587, max=6090.7k, avg=2513.13, stdev=38570.34
00:15:41.764 lat (usec): min=592, max=6090.7k, avg=2518.60, stdev=38570.34
00:15:41.764 clat percentiles (usec):
00:15:41.764 | 1.00th=[ 1844], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2073],
00:15:41.764 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180],
00:15:41.764 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 2999],
00:15:41.764 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 6980], 99.95th=[ 8160],
00:15:41.764 | 99.99th=[13042]
00:15:41.764 bw ( KiB/s): min=12160, max=122232, per=100.00%, avg=113457.91, stdev=13830.19, samples=108
00:15:41.764 iops : min= 3040, max=30558, avg=28364.47, stdev=3457.54, samples=108
00:15:41.764 lat (usec) : 750=0.01%, 1000=0.01%
00:15:41.764 lat (msec) : 2=15.91%, 4=81.57%, 10=2.50%, 20=0.01%, >=2000=0.01%
00:15:41.764 cpu : usr=5.81%, sys=28.67%, ctx=102157, majf=0, minf=14
00:15:41.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:15:41.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:41.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:41.764 issued rwts: total=1546149,1544284,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:41.764 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:41.764
00:15:41.764 Run status group 0 (all jobs):
00:15:41.764 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=6040MiB (6333MB), run=60002-60002msec
00:15:41.764 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=6032MiB (6325MB), run=60002-60002msec
00:15:41.764
00:15:41.764 Disk stats (read/write):
00:15:41.764 ublkb1: ios=1543159/1541305, merge=0/0, ticks=3683974/3653136, in_queue=7337111, util=99.90%
00:15:41.764 09:21:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:41.764 [2024-10-08 09:21:26.244125] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:15:41.764 [2024-10-08 09:21:26.282569] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
completed 00:15:41.764 [2024-10-08 09:21:26.286574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.764 [2024-10-08 09:21:26.296415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.764 [2024-10-08 09:21:26.296548] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:41.764 [2024-10-08 09:21:26.296560] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.764 09:21:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 [2024-10-08 09:21:26.303514] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.764 [2024-10-08 09:21:26.305722] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.764 [2024-10-08 09:21:26.305753] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.764 09:21:26 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:41.764 09:21:26 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:41.764 09:21:26 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71834 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71834 ']' 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71834 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71834 00:15:41.764 killing process with pid 71834 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71834' 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71834 00:15:41.764 09:21:26 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71834 00:15:41.764 [2024-10-08 09:21:27.413112] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.764 [2024-10-08 09:21:27.413405] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.764 ************************************ 00:15:41.764 END TEST ublk_recovery 00:15:41.764 ************************************ 00:15:41.764 00:15:41.764 real 1m4.670s 00:15:41.764 user 1m41.264s 00:15:41.764 sys 0m37.668s 00:15:41.764 09:21:28 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.764 09:21:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 09:21:28 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:41.764 09:21:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.764 09:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 09:21:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:41.764 09:21:28 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.764 09:21:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:41.764 09:21:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.764 09:21:28 -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 ************************************ 00:15:41.764 START TEST ftl 00:15:41.764 ************************************ 00:15:41.764 09:21:28 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.764 * Looking for test storage... 00:15:41.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.764 09:21:28 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:41.764 09:21:28 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:41.764 09:21:28 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:41.764 09:21:28 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.764 09:21:28 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.764 09:21:28 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.764 09:21:28 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.764 09:21:28 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.764 09:21:28 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.764 09:21:28 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:41.764 09:21:28 ftl -- scripts/common.sh@345 -- # : 1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.764 09:21:28 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.764 09:21:28 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@353 -- # local d=1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.764 09:21:28 ftl -- scripts/common.sh@355 -- # echo 1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.764 09:21:28 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@353 -- # local d=2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.764 09:21:28 ftl -- scripts/common.sh@355 -- # echo 2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.764 09:21:28 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.764 09:21:28 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.765 09:21:28 ftl -- scripts/common.sh@368 -- # return 0 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.765 --rc genhtml_branch_coverage=1 00:15:41.765 --rc genhtml_function_coverage=1 00:15:41.765 --rc genhtml_legend=1 00:15:41.765 --rc geninfo_all_blocks=1 00:15:41.765 --rc geninfo_unexecuted_blocks=1 00:15:41.765 00:15:41.765 ' 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.765 --rc genhtml_branch_coverage=1 00:15:41.765 --rc genhtml_function_coverage=1 00:15:41.765 --rc genhtml_legend=1 00:15:41.765 --rc geninfo_all_blocks=1 00:15:41.765 --rc geninfo_unexecuted_blocks=1 00:15:41.765 00:15:41.765 ' 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.765 --rc genhtml_branch_coverage=1 00:15:41.765 --rc genhtml_function_coverage=1 00:15:41.765 --rc genhtml_legend=1 00:15:41.765 --rc geninfo_all_blocks=1 00:15:41.765 --rc geninfo_unexecuted_blocks=1 00:15:41.765 00:15:41.765 ' 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:41.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.765 --rc genhtml_branch_coverage=1 00:15:41.765 --rc genhtml_function_coverage=1 00:15:41.765 --rc genhtml_legend=1 00:15:41.765 --rc geninfo_all_blocks=1 00:15:41.765 --rc geninfo_unexecuted_blocks=1 00:15:41.765 00:15:41.765 ' 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:41.765 09:21:28 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.765 09:21:28 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.765 09:21:28 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.765 09:21:28 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
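The xtrace above steps through the lcov version gate from scripts/common.sh: each version string is split on ".-:" into components, the components are compared numerically left to right, and the first difference decides the result. A minimal standalone sketch of that logic, assuming simplified helper behavior (the real cmp_versions/lt/decimal helpers also normalize non-numeric components, which this sketch omits):

    # Sketch only: split-and-compare version check, reconstructed from the trace.
    cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2  -> status 0 (true)
        local IFS=.-: op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"

Here lcov 1.15 compares less than 2, so the scripts select the lcov 1.x option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) seen in the LCOV_OPTS exports above.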
00:15:41.765 09:21:28 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.765 09:21:28 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.765 09:21:28 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.765 09:21:28 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.765 09:21:28 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.765 09:21:28 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.765 09:21:28 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:41.765 09:21:28 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:41.765 09:21:28 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.765 09:21:28 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.765 09:21:28 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:41.765 09:21:28 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.765 09:21:28 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.765 09:21:28 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.765 09:21:28 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.765 09:21:28 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:41.765 09:21:28 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:41.765 09:21:28 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.765 09:21:28 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:41.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:41.765 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.765 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.765 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.765 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72639 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:41.765 09:21:28 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72639 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@831 -- # '[' -z 72639 ']' 00:15:41.765 09:21:28 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.765 09:21:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 [2024-10-08 09:21:29.005515] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:15:41.765 [2024-10-08 09:21:29.005817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72639 ] 00:15:41.765 [2024-10-08 09:21:29.154199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.765 [2024-10-08 09:21:29.336708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.765 09:21:29 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.765 09:21:29 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:41.765 09:21:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:41.765 09:21:30 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:41.765 09:21:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:41.765 09:21:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@50 -- # break 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@63 -- # break 00:15:41.765 09:21:31 ftl -- ftl/ftl.sh@66 -- # killprocess 72639 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@950 -- # '[' -z 72639 ']' 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@954 -- # kill -0 72639 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@955 -- # uname 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.765 09:21:31 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72639 00:15:41.765 killing process with pid 72639 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72639' 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@969 -- # kill 72639 00:15:41.765 09:21:31 ftl -- common/autotest_common.sh@974 -- # wait 72639 00:15:41.765 09:21:32 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:41.765 09:21:32 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:41.765 09:21:32 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:41.765 09:21:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.765 09:21:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 ************************************ 00:15:41.765 START TEST ftl_fio_basic 00:15:41.765 ************************************ 00:15:41.765 09:21:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:41.765 * Looking for test storage... 00:15:41.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:41.765 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.766 --rc genhtml_branch_coverage=1 00:15:41.766 --rc genhtml_function_coverage=1 00:15:41.766 --rc genhtml_legend=1 00:15:41.766 --rc geninfo_all_blocks=1 00:15:41.766 --rc geninfo_unexecuted_blocks=1 00:15:41.766 00:15:41.766 ' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.766 --rc genhtml_branch_coverage=1 00:15:41.766 --rc genhtml_function_coverage=1 00:15:41.766 --rc genhtml_legend=1 00:15:41.766 --rc geninfo_all_blocks=1 00:15:41.766 --rc geninfo_unexecuted_blocks=1 00:15:41.766 00:15:41.766 ' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.766 --rc genhtml_branch_coverage=1 00:15:41.766 --rc genhtml_function_coverage=1 00:15:41.766 --rc genhtml_legend=1 00:15:41.766 --rc geninfo_all_blocks=1 00:15:41.766 --rc geninfo_unexecuted_blocks=1 00:15:41.766 00:15:41.766 ' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.766 --rc genhtml_branch_coverage=1 00:15:41.766 --rc genhtml_function_coverage=1 00:15:41.766 --rc genhtml_legend=1 00:15:41.766 --rc geninfo_all_blocks=1 00:15:41.766 --rc geninfo_unexecuted_blocks=1 00:15:41.766 00:15:41.766 ' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
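The START TEST / END TEST banners above come from the run_test wrapper in autotest_common.sh, which brackets each test script with banners and argument checks. A reduced sketch of the bannering only, under the assumption that timing and the xtrace_disable toggling visible in the trace are left out:

    run_test() {    # run_test <name> <command> [args...]
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }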
00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72771 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72771 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72771 ']' 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.766 09:21:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.766 [2024-10-08 09:21:33.216145] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
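The fio.sh trace above shows how the job list is chosen: an associative array keyed by suite name maps to a space-separated list of fio job files, and the third positional argument ("basic" in this run) picks the entry. A sketch of that selection with the driver loop simplified (the echo is a hypothetical stand-in for the real fio invocation):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    tests=${suite[$3]}               # $1=base device, $2=cache device, $3=suite
    [ -z "$tests" ] && exit 1        # unknown suite name
    for t in $tests; do
        echo "would run fio job: $t"
    done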
00:15:41.766 [2024-10-08 09:21:33.216486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:15:41.766 [2024-10-08 09:21:33.366795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:42.025 [2024-10-08 09:21:33.551685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.025 [2024-10-08 09:21:33.551965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.025 [2024-10-08 09:21:33.551992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:42.591 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:42.850 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:43.107 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:43.107 { 00:15:43.107 "name": "nvme0n1", 00:15:43.107 "aliases": [ 00:15:43.107 "abd3604c-51fa-445d-8e30-89ced0e53161" 00:15:43.107 ], 00:15:43.107 "product_name": "NVMe disk", 00:15:43.107 "block_size": 4096, 00:15:43.107 "num_blocks": 1310720, 00:15:43.107 "uuid": "abd3604c-51fa-445d-8e30-89ced0e53161", 00:15:43.107 "numa_id": -1, 00:15:43.107 "assigned_rate_limits": { 00:15:43.107 "rw_ios_per_sec": 0, 00:15:43.107 "rw_mbytes_per_sec": 0, 00:15:43.107 "r_mbytes_per_sec": 0, 00:15:43.107 "w_mbytes_per_sec": 0 00:15:43.107 }, 00:15:43.107 "claimed": false, 00:15:43.107 "zoned": false, 00:15:43.107 "supported_io_types": { 00:15:43.107 "read": true, 00:15:43.107 "write": true, 00:15:43.107 "unmap": true, 00:15:43.107 "flush": true, 00:15:43.107 "reset": true, 00:15:43.107 "nvme_admin": true, 00:15:43.107 "nvme_io": true, 00:15:43.107 "nvme_io_md": false, 00:15:43.107 "write_zeroes": true, 00:15:43.107 "zcopy": false, 00:15:43.107 "get_zone_info": false, 00:15:43.107 "zone_management": false, 00:15:43.107 "zone_append": false, 00:15:43.107 "compare": true, 00:15:43.107 "compare_and_write": false, 00:15:43.107 "abort": true, 00:15:43.107 
"seek_hole": false, 00:15:43.107 "seek_data": false, 00:15:43.107 "copy": true, 00:15:43.107 "nvme_iov_md": false 00:15:43.107 }, 00:15:43.107 "driver_specific": { 00:15:43.107 "nvme": [ 00:15:43.107 { 00:15:43.107 "pci_address": "0000:00:11.0", 00:15:43.107 "trid": { 00:15:43.108 "trtype": "PCIe", 00:15:43.108 "traddr": "0000:00:11.0" 00:15:43.108 }, 00:15:43.108 "ctrlr_data": { 00:15:43.108 "cntlid": 0, 00:15:43.108 "vendor_id": "0x1b36", 00:15:43.108 "model_number": "QEMU NVMe Ctrl", 00:15:43.108 "serial_number": "12341", 00:15:43.108 "firmware_revision": "8.0.0", 00:15:43.108 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:43.108 "oacs": { 00:15:43.108 "security": 0, 00:15:43.108 "format": 1, 00:15:43.108 "firmware": 0, 00:15:43.108 "ns_manage": 1 00:15:43.108 }, 00:15:43.108 "multi_ctrlr": false, 00:15:43.108 "ana_reporting": false 00:15:43.108 }, 00:15:43.108 "vs": { 00:15:43.108 "nvme_version": "1.4" 00:15:43.108 }, 00:15:43.108 "ns_data": { 00:15:43.108 "id": 1, 00:15:43.108 "can_share": false 00:15:43.108 } 00:15:43.108 } 00:15:43.108 ], 00:15:43.108 "mp_policy": "active_passive" 00:15:43.108 } 00:15:43.108 } 00:15:43.108 ]' 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:43.108 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:43.382 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:43.382 09:21:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:43.382 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=9b5dfcde-cb3f-44b0-a098-d45a54be0312 00:15:43.382 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b5dfcde-cb3f-44b0-a098-d45a54be0312 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=00721f18-0ee7-4dcc-8e37-52575ac5e373 
00:15:43.652 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:43.652 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:43.910 { 00:15:43.910 "name": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:43.910 "aliases": [ 00:15:43.910 "lvs/nvme0n1p0" 00:15:43.910 ], 00:15:43.910 "product_name": "Logical Volume", 00:15:43.910 "block_size": 4096, 00:15:43.910 "num_blocks": 26476544, 00:15:43.910 "uuid": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:43.910 "assigned_rate_limits": { 00:15:43.910 "rw_ios_per_sec": 0, 00:15:43.910 "rw_mbytes_per_sec": 0, 00:15:43.910 "r_mbytes_per_sec": 0, 00:15:43.910 "w_mbytes_per_sec": 0 00:15:43.910 }, 00:15:43.910 "claimed": false, 00:15:43.910 "zoned": false, 00:15:43.910 "supported_io_types": { 00:15:43.910 "read": true, 00:15:43.910 "write": true, 00:15:43.910 "unmap": true, 00:15:43.910 "flush": false, 00:15:43.910 "reset": true, 00:15:43.910 "nvme_admin": false, 00:15:43.910 "nvme_io": false, 00:15:43.910 "nvme_io_md": false, 00:15:43.910 "write_zeroes": true, 00:15:43.910 "zcopy": false, 00:15:43.910 "get_zone_info": false, 00:15:43.910 "zone_management": false, 00:15:43.910 "zone_append": false, 00:15:43.910 "compare": false, 00:15:43.910 "compare_and_write": false, 00:15:43.910 "abort": false, 00:15:43.910 "seek_hole": true, 00:15:43.910 "seek_data": true, 00:15:43.910 "copy": false, 00:15:43.910 "nvme_iov_md": false 00:15:43.910 }, 00:15:43.910 "driver_specific": { 00:15:43.910 "lvol": { 00:15:43.910 "lvol_store_uuid": "9b5dfcde-cb3f-44b0-a098-d45a54be0312", 00:15:43.910 "base_bdev": "nvme0n1", 00:15:43.910 "thin_provision": true, 00:15:43.910 "num_allocated_clusters": 0, 00:15:43.910 "snapshot": false, 00:15:43.910 "clone": false, 00:15:43.910 "esnap_clone": false 00:15:43.910 } 00:15:43.910 } 00:15:43.910 } 00:15:43.910 ]' 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:43.910 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.168 09:21:35 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:44.168 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.426 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:44.426 { 00:15:44.426 "name": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:44.426 "aliases": [ 00:15:44.426 "lvs/nvme0n1p0" 00:15:44.426 ], 00:15:44.426 "product_name": "Logical Volume", 00:15:44.426 "block_size": 4096, 00:15:44.426 "num_blocks": 26476544, 00:15:44.426 "uuid": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:44.426 "assigned_rate_limits": { 00:15:44.426 "rw_ios_per_sec": 0, 00:15:44.426 "rw_mbytes_per_sec": 0, 00:15:44.426 "r_mbytes_per_sec": 0, 00:15:44.426 "w_mbytes_per_sec": 0 00:15:44.426 }, 00:15:44.426 "claimed": false, 00:15:44.426 "zoned": false, 00:15:44.426 "supported_io_types": { 00:15:44.426 "read": true, 00:15:44.426 "write": true, 00:15:44.426 "unmap": true, 00:15:44.426 "flush": false, 00:15:44.426 "reset": true, 00:15:44.426 "nvme_admin": false, 00:15:44.426 "nvme_io": false, 00:15:44.426 "nvme_io_md": false, 00:15:44.426 "write_zeroes": true, 00:15:44.426 "zcopy": false, 00:15:44.426 "get_zone_info": false, 00:15:44.426 "zone_management": false, 00:15:44.426 "zone_append": false, 00:15:44.426 "compare": false, 00:15:44.426 "compare_and_write": false, 00:15:44.426 "abort": false, 00:15:44.426 "seek_hole": true, 00:15:44.426 "seek_data": true, 00:15:44.426 "copy": false, 00:15:44.426 "nvme_iov_md": false 00:15:44.426 }, 00:15:44.426 "driver_specific": { 00:15:44.426 "lvol": { 00:15:44.426 "lvol_store_uuid": "9b5dfcde-cb3f-44b0-a098-d45a54be0312", 00:15:44.426 "base_bdev": "nvme0n1", 00:15:44.426 "thin_provision": true, 00:15:44.426 "num_allocated_clusters": 0, 00:15:44.426 "snapshot": false, 00:15:44.426 "clone": false, 00:15:44.426 "esnap_clone": false 00:15:44.426 } 00:15:44.426 } 00:15:44.426 } 00:15:44.426 ]' 00:15:44.426 09:21:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:44.426 09:21:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:44.684 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:44.684 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 00721f18-0ee7-4dcc-8e37-52575ac5e373 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:44.942 { 00:15:44.942 "name": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:44.942 "aliases": [ 00:15:44.942 "lvs/nvme0n1p0" 00:15:44.942 ], 00:15:44.942 "product_name": "Logical Volume", 00:15:44.942 "block_size": 4096, 00:15:44.942 "num_blocks": 26476544, 00:15:44.942 "uuid": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:44.942 "assigned_rate_limits": { 00:15:44.942 "rw_ios_per_sec": 0, 00:15:44.942 "rw_mbytes_per_sec": 0, 00:15:44.942 "r_mbytes_per_sec": 0, 00:15:44.942 "w_mbytes_per_sec": 0 00:15:44.942 }, 00:15:44.942 "claimed": false, 00:15:44.942 "zoned": false, 00:15:44.942 "supported_io_types": { 00:15:44.942 "read": true, 00:15:44.942 "write": true, 00:15:44.942 "unmap": true, 00:15:44.942 "flush": false, 00:15:44.942 "reset": true, 00:15:44.942 "nvme_admin": false, 00:15:44.942 "nvme_io": false, 00:15:44.942 "nvme_io_md": false, 00:15:44.942 "write_zeroes": true, 00:15:44.942 "zcopy": false, 00:15:44.942 "get_zone_info": false, 00:15:44.942 "zone_management": false, 00:15:44.942 "zone_append": false, 00:15:44.942 "compare": false, 00:15:44.942 "compare_and_write": false, 00:15:44.942 "abort": false, 00:15:44.942 "seek_hole": true, 00:15:44.942 "seek_data": true, 00:15:44.942 "copy": false, 00:15:44.942 "nvme_iov_md": false 00:15:44.942 }, 00:15:44.942 "driver_specific": { 00:15:44.942 "lvol": { 00:15:44.942 "lvol_store_uuid": "9b5dfcde-cb3f-44b0-a098-d45a54be0312", 00:15:44.942 "base_bdev": "nvme0n1", 00:15:44.942 "thin_provision": true, 00:15:44.942 "num_allocated_clusters": 0, 00:15:44.942 "snapshot": false, 00:15:44.942 "clone": false, 00:15:44.942 "esnap_clone": false 00:15:44.942 } 00:15:44.942 } 00:15:44.942 } 00:15:44.942 ]' 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:44.942 09:21:36 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 00721f18-0ee7-4dcc-8e37-52575ac5e373 -c nvc0n1p0 --l2p_dram_limit 60 00:15:45.201 [2024-10-08 09:21:36.709804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.201 [2024-10-08 09:21:36.709856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:45.201 [2024-10-08 09:21:36.709872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:45.201 
[2024-10-08 09:21:36.709880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.201 [2024-10-08 09:21:36.709956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.201 [2024-10-08 09:21:36.709965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:45.201 [2024-10-08 09:21:36.709973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:45.201 [2024-10-08 09:21:36.709980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.201 [2024-10-08 09:21:36.710022] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:45.202 [2024-10-08 09:21:36.710632] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:45.202 [2024-10-08 09:21:36.710774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.710783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:45.202 [2024-10-08 09:21:36.710793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:15:45.202 [2024-10-08 09:21:36.710799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.710876] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4878c0b-2581-44bf-9e0b-03d2250d2113 00:15:45.202 [2024-10-08 09:21:36.712203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.712238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:45.202 [2024-10-08 09:21:36.712248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:45.202 [2024-10-08 09:21:36.712257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.719100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.719129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:45.202 [2024-10-08 09:21:36.719137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:15:45.202 [2024-10-08 09:21:36.719145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.719236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.719246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:45.202 [2024-10-08 09:21:36.719253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:15:45.202 [2024-10-08 09:21:36.719264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.719347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.719357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:45.202 [2024-10-08 09:21:36.719364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:15:45.202 [2024-10-08 09:21:36.719373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.719412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:45.202 [2024-10-08 09:21:36.722673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 
09:21:36.722698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:45.202 [2024-10-08 09:21:36.722708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:15:45.202 [2024-10-08 09:21:36.722714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.722758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.722766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:45.202 [2024-10-08 09:21:36.722774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:45.202 [2024-10-08 09:21:36.722780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.722814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:45.202 [2024-10-08 09:21:36.722934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:45.202 [2024-10-08 09:21:36.722948] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:45.202 [2024-10-08 09:21:36.722957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:45.202 [2024-10-08 09:21:36.722967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:45.202 [2024-10-08 09:21:36.722976] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:45.202 [2024-10-08 09:21:36.722985] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:45.202 [2024-10-08 09:21:36.722991] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:45.202 [2024-10-08 09:21:36.722998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:45.202 [2024-10-08 09:21:36.723004] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:45.202 [2024-10-08 09:21:36.723012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.723018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:45.202 [2024-10-08 09:21:36.723025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:15:45.202 [2024-10-08 09:21:36.723031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.723105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.202 [2024-10-08 09:21:36.723112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:45.202 [2024-10-08 09:21:36.723122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:45.202 [2024-10-08 09:21:36.723128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.202 [2024-10-08 09:21:36.723218] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:45.202 [2024-10-08 09:21:36.723226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:45.202 [2024-10-08 09:21:36.723234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723247] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:45.202 [2024-10-08 09:21:36.723252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:45.202 [2024-10-08 09:21:36.723277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.202 [2024-10-08 09:21:36.723289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:45.202 [2024-10-08 09:21:36.723301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:45.202 [2024-10-08 09:21:36.723308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:45.202 [2024-10-08 09:21:36.723313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:45.202 [2024-10-08 09:21:36.723320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:45.202 [2024-10-08 09:21:36.723326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:45.202 [2024-10-08 09:21:36.723340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:45.202 [2024-10-08 09:21:36.723361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:45.202 [2024-10-08 09:21:36.723378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:45.202 [2024-10-08 09:21:36.723413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:45.202 [2024-10-08 09:21:36.723432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:45.202 [2024-10-08 09:21:36.723453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:45.202 [2024-10-08 09:21:36.723465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:45.202 [2024-10-08 09:21:36.723470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:45.202 [2024-10-08 09:21:36.723478] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:45.202 [2024-10-08 09:21:36.723483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:45.202 [2024-10-08 09:21:36.723490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:45.202 [2024-10-08 09:21:36.723507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:45.202 [2024-10-08 09:21:36.723521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:45.202 [2024-10-08 09:21:36.723527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723532] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:45.202 [2024-10-08 09:21:36.723539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:45.202 [2024-10-08 09:21:36.723549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:45.202 [2024-10-08 09:21:36.723564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:45.202 [2024-10-08 09:21:36.723572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:45.202 [2024-10-08 09:21:36.723578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:45.202 [2024-10-08 09:21:36.723585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:45.202 [2024-10-08 09:21:36.723591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:45.202 [2024-10-08 09:21:36.723598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:45.202 [2024-10-08 09:21:36.723617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:45.202 [2024-10-08 09:21:36.723626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.202 [2024-10-08 09:21:36.723633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:45.202 [2024-10-08 09:21:36.723640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:45.202 [2024-10-08 09:21:36.723645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:45.202 [2024-10-08 09:21:36.723652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:45.203 [2024-10-08 09:21:36.723657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:45.203 [2024-10-08 09:21:36.723664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:45.203 [2024-10-08 09:21:36.723669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:45.203 [2024-10-08 09:21:36.723676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:45.203 [2024-10-08 09:21:36.723681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:45.203 [2024-10-08 09:21:36.723691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:45.203 [2024-10-08 09:21:36.723725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:45.203 [2024-10-08 09:21:36.723733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:45.203 [2024-10-08 09:21:36.723748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:45.203 [2024-10-08 09:21:36.723754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:45.203 [2024-10-08 09:21:36.723761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:45.203 [2024-10-08 09:21:36.723767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.203 [2024-10-08 09:21:36.723774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:45.203 [2024-10-08 09:21:36.723781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:15:45.203 [2024-10-08 09:21:36.723788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.203 [2024-10-08 09:21:36.723858] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
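The layout dump above makes the L2P sizing checkable by hand: the on-disk l2p region is simply entries times address size. With the values from the log, 20971520 entries x 4 bytes = 83886080 bytes = 80.00 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line in the NV cache layout. A two-line check, with the values copied from the dump:

    entries=20971520; addr_size=4                   # "L2P entries" / "L2P address size"
    echo $(( entries * addr_size / 1024 / 1024 ))   # -> 80 (MiB)

Only part of that table is kept in DRAM: the bdev_ftl_create call above passed --l2p_dram_limit 60, and the l2p_cache notice further down reports a maximum resident size of 59 (of 60) MiB, with the rest of the 80 MiB region on the cache device read in as needed.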
00:15:45.203 [2024-10-08 09:21:36.723872] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:47.734 [2024-10-08 09:21:39.258883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.258956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:47.734 [2024-10-08 09:21:39.258974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2535.012 ms 00:15:47.734 [2024-10-08 09:21:39.258985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.296709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.297019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:47.734 [2024-10-08 09:21:39.297049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.496 ms 00:15:47.734 [2024-10-08 09:21:39.297065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.297290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.297315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:47.734 [2024-10-08 09:21:39.297329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:47.734 [2024-10-08 09:21:39.297345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.331502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.331700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:47.734 [2024-10-08 09:21:39.331719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.067 ms 00:15:47.734 [2024-10-08 09:21:39.331731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.331789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.331801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:47.734 [2024-10-08 09:21:39.331810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:47.734 [2024-10-08 09:21:39.331823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.332274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.332294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:47.734 [2024-10-08 09:21:39.332304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:15:47.734 [2024-10-08 09:21:39.332314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.332476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.332488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:47.734 [2024-10-08 09:21:39.332497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:15:47.734 [2024-10-08 09:21:39.332509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.348497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.348546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:47.734 [2024-10-08 
09:21:39.348557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.957 ms 00:15:47.734 [2024-10-08 09:21:39.348569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.734 [2024-10-08 09:21:39.360793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:47.734 [2024-10-08 09:21:39.377899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.734 [2024-10-08 09:21:39.377941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:47.734 [2024-10-08 09:21:39.377955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.191 ms 00:15:47.734 [2024-10-08 09:21:39.377963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.428810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.428872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:47.993 [2024-10-08 09:21:39.428889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.788 ms 00:15:47.993 [2024-10-08 09:21:39.428897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.429096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.429108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:47.993 [2024-10-08 09:21:39.429121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:15:47.993 [2024-10-08 09:21:39.429132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.452401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.452445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:47.993 [2024-10-08 09:21:39.452460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.198 ms 00:15:47.993 [2024-10-08 09:21:39.452468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.475140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.475335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:47.993 [2024-10-08 09:21:39.475358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.618 ms 00:15:47.993 [2024-10-08 09:21:39.475365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.476265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.476323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:47.993 [2024-10-08 09:21:39.476338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:15:47.993 [2024-10-08 09:21:39.476347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.544566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.544626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:47.993 [2024-10-08 09:21:39.544647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.138 ms 00:15:47.993 [2024-10-08 09:21:39.544655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 
09:21:39.569481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.569527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:47.993 [2024-10-08 09:21:39.569544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.719 ms 00:15:47.993 [2024-10-08 09:21:39.569553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.592837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.592876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:47.993 [2024-10-08 09:21:39.592890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.233 ms 00:15:47.993 [2024-10-08 09:21:39.592898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.616088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.616127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:47.993 [2024-10-08 09:21:39.616141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.137 ms 00:15:47.993 [2024-10-08 09:21:39.616150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.616198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.616209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:47.993 [2024-10-08 09:21:39.616223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:47.993 [2024-10-08 09:21:39.616231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.993 [2024-10-08 09:21:39.616326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:47.993 [2024-10-08 09:21:39.616336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:47.993 [2024-10-08 09:21:39.616347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:47.993 [2024-10-08 09:21:39.616354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:47.994 [2024-10-08 09:21:39.617404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2907.113 ms, result 0 00:15:47.994 { 00:15:47.994 "name": "ftl0", 00:15:47.994 "uuid": "a4878c0b-2581-44bf-9e0b-03d2250d2113" 00:15:47.994 } 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.994 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:48.252 09:21:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:48.510 [ 00:15:48.510 { 00:15:48.510 "name": "ftl0", 00:15:48.510 "aliases": [ 00:15:48.510 "a4878c0b-2581-44bf-9e0b-03d2250d2113" 00:15:48.510 ], 00:15:48.510 "product_name": "FTL 
disk", 00:15:48.510 "block_size": 4096, 00:15:48.510 "num_blocks": 20971520, 00:15:48.510 "uuid": "a4878c0b-2581-44bf-9e0b-03d2250d2113", 00:15:48.510 "assigned_rate_limits": { 00:15:48.510 "rw_ios_per_sec": 0, 00:15:48.510 "rw_mbytes_per_sec": 0, 00:15:48.510 "r_mbytes_per_sec": 0, 00:15:48.510 "w_mbytes_per_sec": 0 00:15:48.510 }, 00:15:48.510 "claimed": false, 00:15:48.510 "zoned": false, 00:15:48.510 "supported_io_types": { 00:15:48.510 "read": true, 00:15:48.510 "write": true, 00:15:48.510 "unmap": true, 00:15:48.510 "flush": true, 00:15:48.510 "reset": false, 00:15:48.510 "nvme_admin": false, 00:15:48.510 "nvme_io": false, 00:15:48.510 "nvme_io_md": false, 00:15:48.510 "write_zeroes": true, 00:15:48.510 "zcopy": false, 00:15:48.510 "get_zone_info": false, 00:15:48.510 "zone_management": false, 00:15:48.510 "zone_append": false, 00:15:48.510 "compare": false, 00:15:48.510 "compare_and_write": false, 00:15:48.510 "abort": false, 00:15:48.510 "seek_hole": false, 00:15:48.510 "seek_data": false, 00:15:48.510 "copy": false, 00:15:48.510 "nvme_iov_md": false 00:15:48.510 }, 00:15:48.510 "driver_specific": { 00:15:48.510 "ftl": { 00:15:48.510 "base_bdev": "00721f18-0ee7-4dcc-8e37-52575ac5e373", 00:15:48.510 "cache": "nvc0n1p0" 00:15:48.510 } 00:15:48.510 } 00:15:48.510 } 00:15:48.510 ] 00:15:48.510 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:48.510 09:21:40 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:48.510 09:21:40 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:48.768 09:21:40 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:48.768 09:21:40 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:48.768 [2024-10-08 09:21:40.438158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.768 [2024-10-08 09:21:40.438222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:48.768 [2024-10-08 09:21:40.438236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:48.768 [2024-10-08 09:21:40.438247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.768 [2024-10-08 09:21:40.438279] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:48.768 [2024-10-08 09:21:40.441107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.768 [2024-10-08 09:21:40.441140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:48.768 [2024-10-08 09:21:40.441153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:15:48.768 [2024-10-08 09:21:40.441162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.768 [2024-10-08 09:21:40.441667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.768 [2024-10-08 09:21:40.441684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:48.768 [2024-10-08 09:21:40.441695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:15:48.768 [2024-10-08 09:21:40.441703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.768 [2024-10-08 09:21:40.444957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.768 [2024-10-08 09:21:40.445105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:48.768 
[2024-10-08 09:21:40.445123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:15:48.768 [2024-10-08 09:21:40.445132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.768 [2024-10-08 09:21:40.451328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.768 [2024-10-08 09:21:40.451357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:48.768 [2024-10-08 09:21:40.451368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.167 ms 00:15:48.768 [2024-10-08 09:21:40.451376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.475688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.475725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:49.027 [2024-10-08 09:21:40.475738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.194 ms 00:15:49.027 [2024-10-08 09:21:40.475747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.491039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.491206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:49.027 [2024-10-08 09:21:40.491227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.240 ms 00:15:49.027 [2024-10-08 09:21:40.491236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.491463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.491477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:49.027 [2024-10-08 09:21:40.491488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:15:49.027 [2024-10-08 09:21:40.491496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.515037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.515070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:49.027 [2024-10-08 09:21:40.515082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.511 ms 00:15:49.027 [2024-10-08 09:21:40.515090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.537987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.538118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:49.027 [2024-10-08 09:21:40.538139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.848 ms 00:15:49.027 [2024-10-08 09:21:40.538147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.560184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.560216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:49.027 [2024-10-08 09:21:40.560228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.990 ms 00:15:49.027 [2024-10-08 09:21:40.560235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.582671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.027 [2024-10-08 09:21:40.582704] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:49.027 [2024-10-08 09:21:40.582717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.339 ms 00:15:49.027 [2024-10-08 09:21:40.582725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.027 [2024-10-08 09:21:40.582767] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:49.027 [2024-10-08 09:21:40.582783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:49.027 [2024-10-08 09:21:40.582798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:49.027 [2024-10-08 09:21:40.582806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:49.027 [2024-10-08 09:21:40.582816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:49.027 [2024-10-08 09:21:40.582825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:49.027 [2024-10-08 09:21:40.582835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.582977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 
[2024-10-08 09:21:40.582984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23-96: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band lines collapsed) 00:15:49.028 [2024-10-08 09:21:40.583700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.583710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:49.028 [2024-10-08 09:21:40.583718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:49.029 [2024-10-08 09:21:40.583727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:49.029 [2024-10-08 09:21:40.583744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:49.029 [2024-10-08 09:21:40.583754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4878c0b-2581-44bf-9e0b-03d2250d2113 00:15:49.029 [2024-10-08 09:21:40.583768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:49.029 [2024-10-08 09:21:40.583778] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:49.029 [2024-10-08 09:21:40.583786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:49.029 [2024-10-08 09:21:40.583795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:49.029 [2024-10-08 09:21:40.583801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:49.029 [2024-10-08 09:21:40.583811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:49.029 [2024-10-08 09:21:40.583818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:49.029 [2024-10-08 09:21:40.583826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:49.029 [2024-10-08 09:21:40.583833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:49.029 [2024-10-08 09:21:40.583842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.029 [2024-10-08 09:21:40.583849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:49.029 [2024-10-08 09:21:40.583858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:15:49.029 [2024-10-08 09:21:40.583868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.029 [2024-10-08 09:21:40.596869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.029 [2024-10-08 09:21:40.596902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:49.029 [2024-10-08 09:21:40.596915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:15:49.029 [2024-10-08 09:21:40.596923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.029 [2024-10-08 09:21:40.597309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.029 [2024-10-08 09:21:40.597330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:49.029 [2024-10-08 09:21:40.597344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:15:49.029 [2024-10-08 09:21:40.597351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.029 [2024-10-08 09:21:40.643436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.029 [2024-10-08 09:21:40.643484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:49.029 [2024-10-08 09:21:40.643498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.029 [2024-10-08 09:21:40.643506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
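The `WAF: inf` line in the statistics dump above is expected at this point: write amplification is media writes divided by user writes, and this `bdev_ftl_unload` runs before any fio workload has touched the device, so all 960 recorded writes are metadata and the user-write count is still zero. A small illustration of that arithmetic (my own sketch, not SPDK code):

```python
import math

# Sketch of the write-amplification arithmetic behind the stats dump above;
# not SPDK code. total_writes (960 here) counts all media writes; user_writes
# (0 here) counts writes issued by the user. Zero user writes yields the
# "WAF: inf" seen in the log.
def write_amplification(total_writes: int, user_writes: int) -> float:
    return math.inf if user_writes == 0 else total_writes / user_writes

print(write_amplification(960, 0))    # inf, as in the dump
print(write_amplification(960, 480))  # 2.0 -- two media writes per user write
```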
00:15:49.029 [2024-10-08 09:21:40.643586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.029 [2024-10-08 09:21:40.643596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:49.029 [2024-10-08 09:21:40.643609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.029 [2024-10-08 09:21:40.643617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.029 [2024-10-08 09:21:40.643718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.029 [2024-10-08 09:21:40.643729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:49.029 [2024-10-08 09:21:40.643739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.029 [2024-10-08 09:21:40.643747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.029 [2024-10-08 09:21:40.643786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.029 [2024-10-08 09:21:40.643795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:49.029 [2024-10-08 09:21:40.643804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.029 [2024-10-08 09:21:40.643813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.729968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.730030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:49.288 [2024-10-08 09:21:40.730045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.730054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.796519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.796580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:49.288 [2024-10-08 09:21:40.796597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.796605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.796722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.796732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:49.288 [2024-10-08 09:21:40.796743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.796751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.796815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.796825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:49.288 [2024-10-08 09:21:40.796835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.796841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.796959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.796969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:49.288 [2024-10-08 09:21:40.796979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 
09:21:40.796987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.797037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.797047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:49.288 [2024-10-08 09:21:40.797057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.797064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.797115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.797128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:49.288 [2024-10-08 09:21:40.797137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.797145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.797206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:49.288 [2024-10-08 09:21:40.797216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:49.288 [2024-10-08 09:21:40.797226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:49.288 [2024-10-08 09:21:40.797233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.288 [2024-10-08 09:21:40.797429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.222 ms, result 0 00:15:49.288 true 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72771 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72771 ']' 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72771 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72771 00:15:49.288 killing process with pid 72771 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72771' 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72771 00:15:49.288 09:21:40 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72771 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:54.549 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:54.550 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:54.550 09:21:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:54.807 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:54.807 fio-3.35 00:15:54.807 Starting 1 thread 00:15:58.991 00:15:58.991 test: (groupid=0, jobs=1): err= 0: pid=72956: Tue Oct 8 09:21:50 2024 00:15:58.991 read: IOPS=1295, BW=86.0MiB/s (90.2MB/s)(255MiB/2959msec) 00:15:58.991 slat (nsec): min=4047, max=19809, avg=5707.19, stdev=2117.78 00:15:58.991 clat (usec): min=255, max=981, avg=348.12, stdev=55.86 00:15:58.991 lat (usec): min=260, max=985, avg=353.83, stdev=56.61 00:15:58.991 clat percentiles (usec): 00:15:58.991 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 322], 20.00th=[ 326], 00:15:58.991 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:15:58.991 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 453], 00:15:58.991 | 99.00th=[ 611], 99.50th=[ 668], 99.90th=[ 840], 99.95th=[ 947], 00:15:58.991 | 99.99th=[ 979] 00:15:58.991 write: IOPS=1304, BW=86.6MiB/s (90.8MB/s)(256MiB/2956msec); 0 zone resets 00:15:58.991 slat (nsec): min=14879, max=69315, avg=21519.74, stdev=4316.59 00:15:58.991 clat (usec): min=307, max=1100, avg=379.84, stdev=72.57 00:15:58.991 lat (usec): min=327, max=1134, avg=401.36, stdev=72.88 00:15:58.991 clat percentiles (usec): 00:15:58.991 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 351], 00:15:58.991 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 363], 00:15:58.991 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 429], 95.00th=[ 506], 00:15:58.991 | 99.00th=[ 717], 99.50th=[ 766], 99.90th=[ 1029], 99.95th=[ 1057], 00:15:58.991 | 99.99th=[ 1106] 00:15:58.991 bw ( KiB/s): min=87584, max=89760, per=100.00%, avg=88862.40, stdev=950.06, samples=5 00:15:58.991 iops : min= 1288, max= 1320, avg=1306.80, stdev=13.97, samples=5 00:15:58.991 lat (usec) : 500=95.70%, 750=3.94%, 1000=0.31% 00:15:58.991 
lat (msec) : 2=0.05% 00:15:58.991 cpu : usr=99.22%, sys=0.03%, ctx=5, majf=0, minf=1169 00:15:58.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:58.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.991 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:58.991 00:15:58.991 Run status group 0 (all jobs): 00:15:58.991 READ: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=255MiB (267MB), run=2959-2959msec 00:15:58.991 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=256MiB (269MB), run=2956-2956msec 00:16:00.366 ----------------------------------------------------- 00:16:00.366 Suppressions used: 00:16:00.366 count bytes template 00:16:00.366 1 5 /usr/src/fio/parse.c 00:16:00.366 1 8 libtcmalloc_minimal.so 00:16:00.366 1 904 libcrypto.so 00:16:00.366 ----------------------------------------------------- 00:16:00.366 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:00.366 09:21:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:00.625 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:00.625 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:00.625 fio-3.35 00:16:00.625 Starting 2 threads 00:16:27.161 00:16:27.161 first_half: (groupid=0, jobs=1): err= 0: pid=73048: Tue Oct 8 09:22:16 2024 00:16:27.161 read: IOPS=2864, BW=11.2MiB/s (11.7MB/s)(255MiB/22757msec) 00:16:27.161 slat (nsec): min=3134, max=75311, avg=5233.94, stdev=1282.23 00:16:27.161 clat (usec): min=589, max=286579, avg=33869.22, stdev=15092.69 00:16:27.161 lat (usec): min=595, max=286585, avg=33874.46, stdev=15092.73 00:16:27.161 clat percentiles (msec): 00:16:27.161 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 32], 00:16:27.161 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:16:27.161 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 39], 95.00th=[ 42], 00:16:27.161 | 99.00th=[ 109], 99.50th=[ 140], 99.90th=[ 226], 99.95th=[ 249], 00:16:27.161 | 99.99th=[ 279] 00:16:27.161 write: IOPS=4375, BW=17.1MiB/s (17.9MB/s)(256MiB/14978msec); 0 zone resets 00:16:27.161 slat (usec): min=4, max=506, avg= 6.90, stdev= 3.44 00:16:27.161 clat (usec): min=422, max=89350, avg=10737.28, stdev=18559.12 00:16:27.161 lat (usec): min=437, max=89357, avg=10744.18, stdev=18559.16 00:16:27.162 clat percentiles (usec): 00:16:27.162 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 857], 20.00th=[ 1057], 00:16:27.162 | 30.00th=[ 1254], 40.00th=[ 1582], 50.00th=[ 2966], 60.00th=[ 4817], 00:16:27.162 | 70.00th=[ 6521], 80.00th=[11994], 90.00th=[48497], 95.00th=[62129], 00:16:27.162 | 99.00th=[73925], 99.50th=[76022], 99.90th=[83362], 99.95th=[86508], 00:16:27.162 | 99.99th=[88605] 00:16:27.162 bw ( KiB/s): min= 136, max=54880, per=84.84%, avg=26214.40, stdev=15317.26, samples=20 00:16:27.162 iops : min= 34, max=13720, avg=6553.60, stdev=3829.31, samples=20 00:16:27.162 lat (usec) : 500=0.01%, 750=2.18%, 1000=6.66% 00:16:27.162 lat (msec) : 2=13.69%, 4=5.92%, 10=10.02%, 20=6.29%, 50=48.63% 00:16:27.162 lat (msec) : 100=6.03%, 250=0.56%, 500=0.02% 00:16:27.162 cpu : usr=99.17%, sys=0.11%, ctx=88, majf=0, minf=5565 00:16:27.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:27.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.162 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.162 issued rwts: total=65193,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.162 second_half: (groupid=0, jobs=1): err= 0: pid=73049: Tue Oct 8 09:22:16 2024 00:16:27.162 read: IOPS=2841, BW=11.1MiB/s (11.6MB/s)(255MiB/22936msec) 00:16:27.162 slat (nsec): min=3083, max=66934, avg=5052.92, stdev=949.89 00:16:27.162 clat (usec): min=560, max=292293, avg=33136.70, stdev=14999.07 00:16:27.162 lat (usec): min=567, max=292297, avg=33141.75, stdev=14999.11 00:16:27.162 clat percentiles (msec): 00:16:27.162 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 32], 00:16:27.162 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:16:27.162 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 42], 00:16:27.162 | 
99.00th=[ 112], 99.50th=[ 138], 99.90th=[ 171], 99.95th=[ 190], 00:16:27.162 | 99.99th=[ 288] 00:16:27.162 write: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(256MiB/16968msec); 0 zone resets 00:16:27.162 slat (usec): min=3, max=320, avg= 6.71, stdev= 2.84 00:16:27.162 clat (usec): min=360, max=89518, avg=11816.21, stdev=18941.41 00:16:27.162 lat (usec): min=371, max=89527, avg=11822.92, stdev=18941.48 00:16:27.162 clat percentiles (usec): 00:16:27.162 | 1.00th=[ 586], 5.00th=[ 676], 10.00th=[ 766], 20.00th=[ 947], 00:16:27.162 | 30.00th=[ 1123], 40.00th=[ 1467], 50.00th=[ 3228], 60.00th=[ 5342], 00:16:27.162 | 70.00th=[10552], 80.00th=[13960], 90.00th=[48497], 95.00th=[62653], 00:16:27.162 | 99.00th=[74974], 99.50th=[78119], 99.90th=[85459], 99.95th=[88605], 00:16:27.162 | 99.99th=[89654] 00:16:27.162 bw ( KiB/s): min= 1400, max=44528, per=73.77%, avg=22795.13, stdev=12996.46, samples=23 00:16:27.162 iops : min= 350, max=11132, avg=5698.78, stdev=3249.12, samples=23 00:16:27.162 lat (usec) : 500=0.01%, 750=4.61%, 1000=7.09% 00:16:27.162 lat (msec) : 2=9.90%, 4=5.77%, 10=9.23%, 20=7.77%, 50=49.18% 00:16:27.162 lat (msec) : 100=5.70%, 250=0.74%, 500=0.01% 00:16:27.162 cpu : usr=99.36%, sys=0.14%, ctx=30, majf=0, minf=5540 00:16:27.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:27.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.162 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.162 issued rwts: total=65176,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.162 00:16:27.162 Run status group 0 (all jobs): 00:16:27.162 READ: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=509MiB (534MB), run=22757-22936msec 00:16:27.162 WRITE: bw=30.2MiB/s (31.6MB/s), 15.1MiB/s-17.1MiB/s (15.8MB/s-17.9MB/s), io=512MiB (537MB), run=14978-16968msec 00:16:27.162 ----------------------------------------------------- 00:16:27.162 Suppressions used: 00:16:27.162 count bytes template 00:16:27.162 2 10 /usr/src/fio/parse.c 00:16:27.162 2 192 /usr/src/fio/iolog.c 00:16:27.162 1 8 libtcmalloc_minimal.so 00:16:27.162 1 904 libcrypto.so 00:16:27.162 ----------------------------------------------------- 00:16:27.162 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:27.162 09:22:17 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:27.162 09:22:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:27.162 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:27.162 fio-3.35 00:16:27.162 Starting 1 thread 00:16:39.363 00:16:39.364 test: (groupid=0, jobs=1): err= 0: pid=73351: Tue Oct 8 09:22:31 2024 00:16:39.364 read: IOPS=7962, BW=31.1MiB/s (32.6MB/s)(255MiB/8189msec) 00:16:39.364 slat (usec): min=3, max=141, avg= 4.64, stdev= 1.18 00:16:39.364 clat (usec): min=538, max=31576, avg=16066.51, stdev=1884.11 00:16:39.364 lat (usec): min=542, max=31581, avg=16071.15, stdev=1884.13 00:16:39.364 clat percentiles (usec): 00:16:39.364 | 1.00th=[13960], 5.00th=[14222], 10.00th=[14353], 20.00th=[14615], 00:16:39.364 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16057], 60.00th=[16188], 00:16:39.364 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16712], 95.00th=[19006], 00:16:39.364 | 99.00th=[24511], 99.50th=[25297], 99.90th=[28705], 99.95th=[30802], 00:16:39.364 | 99.99th=[31327] 00:16:39.364 write: IOPS=16.7k, BW=65.3MiB/s (68.4MB/s)(256MiB/3923msec); 0 zone resets 00:16:39.364 slat (usec): min=4, max=141, avg= 7.12, stdev= 2.64 00:16:39.364 clat (usec): min=493, max=43943, avg=7618.37, stdev=9549.30 00:16:39.364 lat (usec): min=499, max=43950, avg=7625.49, stdev=9549.28 00:16:39.364 clat percentiles (usec): 00:16:39.364 | 1.00th=[ 603], 5.00th=[ 685], 10.00th=[ 742], 20.00th=[ 840], 00:16:39.364 | 30.00th=[ 988], 40.00th=[ 1385], 50.00th=[ 5145], 60.00th=[ 5866], 00:16:39.364 | 70.00th=[ 6849], 80.00th=[ 8586], 90.00th=[27132], 95.00th=[28967], 00:16:39.364 | 99.00th=[34866], 99.50th=[38011], 99.90th=[41157], 99.95th=[41681], 00:16:39.364 | 99.99th=[43254] 00:16:39.364 bw ( KiB/s): min=51944, max=87480, per=98.05%, avg=65519.63, stdev=12152.62, samples=8 00:16:39.364 iops : min=12986, max=21870, avg=16379.87, stdev=3038.16, samples=8 00:16:39.364 lat (usec) : 500=0.01%, 750=5.59%, 1000=9.93% 00:16:39.364 lat (msec) : 2=5.02%, 4=0.68%, 10=20.70%, 20=48.02%, 50=10.06% 00:16:39.364 cpu : usr=99.07%, sys=0.24%, ctx=19, majf=0, minf=5565 00:16:39.364 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:39.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.364 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:39.364 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:39.364 00:16:39.364 Run status group 0 (all jobs): 00:16:39.364 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8189-8189msec 00:16:39.364 WRITE: bw=65.3MiB/s (68.4MB/s), 65.3MiB/s-65.3MiB/s (68.4MB/s-68.4MB/s), io=256MiB (268MB), run=3923-3923msec 00:16:41.265 ----------------------------------------------------- 00:16:41.265 Suppressions used: 00:16:41.265 count bytes template 00:16:41.265 1 5 /usr/src/fio/parse.c 00:16:41.265 2 192 /usr/src/fio/iolog.c 00:16:41.265 1 8 libtcmalloc_minimal.so 00:16:41.265 1 904 libcrypto.so 00:16:41.265 ----------------------------------------------------- 00:16:41.265 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.265 Remove shared memory files 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57469 /dev/shm/spdk_tgt_trace.pid71687 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:41.265 00:16:41.265 real 0m59.564s 00:16:41.265 user 2m11.789s 00:16:41.265 sys 0m2.735s 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.265 09:22:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.265 ************************************ 00:16:41.265 END TEST ftl_fio_basic 00:16:41.265 ************************************ 00:16:41.265 09:22:32 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:41.265 09:22:32 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:41.265 09:22:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.265 09:22:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:41.265 ************************************ 00:16:41.265 START TEST ftl_bdevperf 00:16:41.265 ************************************ 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:41.265 * Looking for test storage... 
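The randw-verify jobs above drive fio against the FTL bdev through SPDK's fio plugin rather than a kernel block device: the xtrace shows the test LD_PRELOADing /usr/lib64/libasan.so.8 together with /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev before invoking /usr/src/fio/fio on the job file. The job file itself is never echoed into the log; the sketch below reconstructs what randw-verify-depth128.fio roughly amounts to from the fio banner (rw=randwrite, bs=4096B, ioengine=spdk_bdev, iodepth=128). The filename, the spdk_json_conf line, and the verify options are assumptions, not values copied from this run; the ftl.json path does appear in the cleanup step above, but whether this job file points at it is also an assumption.

    # sketch only -- reconstructed from the fio banner, not the actual file
    [global]
    ioengine=spdk_bdev                                                   # SPDK fio bdev plugin, loaded via LD_PRELOAD
    spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json # assumed config path
    thread=1

    [test]
    filename=ftl0        # assumed bdev name
    rw=randwrite
    bs=4096
    iodepth=128
    verify=crc32c        # assumed; the job name implies a write+verify pass
    do_verify=1

The invocation itself is copied from the xtrace above:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio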
00:16:41.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:41.265 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:41.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.266 --rc genhtml_branch_coverage=1 00:16:41.266 --rc genhtml_function_coverage=1 00:16:41.266 --rc genhtml_legend=1 00:16:41.266 --rc geninfo_all_blocks=1 00:16:41.266 --rc geninfo_unexecuted_blocks=1 00:16:41.266 00:16:41.266 ' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:41.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.266 --rc genhtml_branch_coverage=1 00:16:41.266 
--rc genhtml_function_coverage=1 00:16:41.266 --rc genhtml_legend=1 00:16:41.266 --rc geninfo_all_blocks=1 00:16:41.266 --rc geninfo_unexecuted_blocks=1 00:16:41.266 00:16:41.266 ' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:41.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.266 --rc genhtml_branch_coverage=1 00:16:41.266 --rc genhtml_function_coverage=1 00:16:41.266 --rc genhtml_legend=1 00:16:41.266 --rc geninfo_all_blocks=1 00:16:41.266 --rc geninfo_unexecuted_blocks=1 00:16:41.266 00:16:41.266 ' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:41.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.266 --rc genhtml_branch_coverage=1 00:16:41.266 --rc genhtml_function_coverage=1 00:16:41.266 --rc genhtml_legend=1 00:16:41.266 --rc geninfo_all_blocks=1 00:16:41.266 --rc geninfo_unexecuted_blocks=1 00:16:41.266 00:16:41.266 ' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73574 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73574 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73574 ']' 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.266 09:22:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.266 [2024-10-08 09:22:32.800155] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:16:41.266 [2024-10-08 09:22:32.800447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73574 ] 00:16:41.266 [2024-10-08 09:22:32.945492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.525 [2024-10-08 09:22:33.130927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:42.092 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:42.351 09:22:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:42.609 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:42.610 { 00:16:42.610 "name": "nvme0n1", 00:16:42.610 "aliases": [ 00:16:42.610 "aac87a25-f04c-4362-afc1-5225a85f8317" 00:16:42.610 ], 00:16:42.610 "product_name": "NVMe disk", 00:16:42.610 "block_size": 4096, 00:16:42.610 "num_blocks": 1310720, 00:16:42.610 "uuid": "aac87a25-f04c-4362-afc1-5225a85f8317", 00:16:42.610 "numa_id": -1, 00:16:42.610 "assigned_rate_limits": { 00:16:42.610 "rw_ios_per_sec": 0, 00:16:42.610 "rw_mbytes_per_sec": 0, 00:16:42.610 "r_mbytes_per_sec": 0, 00:16:42.610 "w_mbytes_per_sec": 0 00:16:42.610 }, 00:16:42.610 "claimed": true, 00:16:42.610 "claim_type": "read_many_write_one", 00:16:42.610 "zoned": false, 00:16:42.610 "supported_io_types": { 00:16:42.610 "read": true, 00:16:42.610 "write": true, 00:16:42.610 "unmap": true, 00:16:42.610 "flush": true, 00:16:42.610 "reset": true, 00:16:42.610 "nvme_admin": true, 00:16:42.610 "nvme_io": true, 00:16:42.610 "nvme_io_md": false, 00:16:42.610 "write_zeroes": true, 00:16:42.610 "zcopy": false, 00:16:42.610 "get_zone_info": false, 00:16:42.610 "zone_management": false, 00:16:42.610 "zone_append": false, 00:16:42.610 "compare": true, 00:16:42.610 "compare_and_write": false, 00:16:42.610 "abort": true, 00:16:42.610 "seek_hole": false, 00:16:42.610 "seek_data": false, 00:16:42.610 "copy": true, 00:16:42.610 "nvme_iov_md": false 00:16:42.610 }, 00:16:42.610 "driver_specific": { 00:16:42.610 
"nvme": [ 00:16:42.610 { 00:16:42.610 "pci_address": "0000:00:11.0", 00:16:42.610 "trid": { 00:16:42.610 "trtype": "PCIe", 00:16:42.610 "traddr": "0000:00:11.0" 00:16:42.610 }, 00:16:42.610 "ctrlr_data": { 00:16:42.610 "cntlid": 0, 00:16:42.610 "vendor_id": "0x1b36", 00:16:42.610 "model_number": "QEMU NVMe Ctrl", 00:16:42.610 "serial_number": "12341", 00:16:42.610 "firmware_revision": "8.0.0", 00:16:42.610 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:42.610 "oacs": { 00:16:42.610 "security": 0, 00:16:42.610 "format": 1, 00:16:42.610 "firmware": 0, 00:16:42.610 "ns_manage": 1 00:16:42.610 }, 00:16:42.610 "multi_ctrlr": false, 00:16:42.610 "ana_reporting": false 00:16:42.610 }, 00:16:42.610 "vs": { 00:16:42.610 "nvme_version": "1.4" 00:16:42.610 }, 00:16:42.610 "ns_data": { 00:16:42.610 "id": 1, 00:16:42.610 "can_share": false 00:16:42.610 } 00:16:42.610 } 00:16:42.610 ], 00:16:42.610 "mp_policy": "active_passive" 00:16:42.610 } 00:16:42.610 } 00:16:42.610 ]' 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:42.610 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.869 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=9b5dfcde-cb3f-44b0-a098-d45a54be0312 00:16:42.869 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:42.869 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b5dfcde-cb3f-44b0-a098-d45a54be0312 00:16:43.127 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:43.386 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f7d57126-a47d-44e7-9309-7bd229373a26 00:16:43.386 09:22:34 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f7d57126-a47d-44e7-9309-7bd229373a26 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.386 09:22:35 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:43.386 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:43.645 { 00:16:43.645 "name": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:43.645 "aliases": [ 00:16:43.645 "lvs/nvme0n1p0" 00:16:43.645 ], 00:16:43.645 "product_name": "Logical Volume", 00:16:43.645 "block_size": 4096, 00:16:43.645 "num_blocks": 26476544, 00:16:43.645 "uuid": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:43.645 "assigned_rate_limits": { 00:16:43.645 "rw_ios_per_sec": 0, 00:16:43.645 "rw_mbytes_per_sec": 0, 00:16:43.645 "r_mbytes_per_sec": 0, 00:16:43.645 "w_mbytes_per_sec": 0 00:16:43.645 }, 00:16:43.645 "claimed": false, 00:16:43.645 "zoned": false, 00:16:43.645 "supported_io_types": { 00:16:43.645 "read": true, 00:16:43.645 "write": true, 00:16:43.645 "unmap": true, 00:16:43.645 "flush": false, 00:16:43.645 "reset": true, 00:16:43.645 "nvme_admin": false, 00:16:43.645 "nvme_io": false, 00:16:43.645 "nvme_io_md": false, 00:16:43.645 "write_zeroes": true, 00:16:43.645 "zcopy": false, 00:16:43.645 "get_zone_info": false, 00:16:43.645 "zone_management": false, 00:16:43.645 "zone_append": false, 00:16:43.645 "compare": false, 00:16:43.645 "compare_and_write": false, 00:16:43.645 "abort": false, 00:16:43.645 "seek_hole": true, 00:16:43.645 "seek_data": true, 00:16:43.645 "copy": false, 00:16:43.645 "nvme_iov_md": false 00:16:43.645 }, 00:16:43.645 "driver_specific": { 00:16:43.645 "lvol": { 00:16:43.645 "lvol_store_uuid": "f7d57126-a47d-44e7-9309-7bd229373a26", 00:16:43.645 "base_bdev": "nvme0n1", 00:16:43.645 "thin_provision": true, 00:16:43.645 "num_allocated_clusters": 0, 00:16:43.645 "snapshot": false, 00:16:43.645 "clone": false, 00:16:43.645 "esnap_clone": false 00:16:43.645 } 00:16:43.645 } 00:16:43.645 } 00:16:43.645 ]' 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:43.645 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:43.904 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:44.163 { 00:16:44.163 "name": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:44.163 "aliases": [ 00:16:44.163 "lvs/nvme0n1p0" 00:16:44.163 ], 00:16:44.163 "product_name": "Logical Volume", 00:16:44.163 "block_size": 4096, 00:16:44.163 "num_blocks": 26476544, 00:16:44.163 "uuid": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:44.163 "assigned_rate_limits": { 00:16:44.163 "rw_ios_per_sec": 0, 00:16:44.163 "rw_mbytes_per_sec": 0, 00:16:44.163 "r_mbytes_per_sec": 0, 00:16:44.163 "w_mbytes_per_sec": 0 00:16:44.163 }, 00:16:44.163 "claimed": false, 00:16:44.163 "zoned": false, 00:16:44.163 "supported_io_types": { 00:16:44.163 "read": true, 00:16:44.163 "write": true, 00:16:44.163 "unmap": true, 00:16:44.163 "flush": false, 00:16:44.163 "reset": true, 00:16:44.163 "nvme_admin": false, 00:16:44.163 "nvme_io": false, 00:16:44.163 "nvme_io_md": false, 00:16:44.163 "write_zeroes": true, 00:16:44.163 "zcopy": false, 00:16:44.163 "get_zone_info": false, 00:16:44.163 "zone_management": false, 00:16:44.163 "zone_append": false, 00:16:44.163 "compare": false, 00:16:44.163 "compare_and_write": false, 00:16:44.163 "abort": false, 00:16:44.163 "seek_hole": true, 00:16:44.163 "seek_data": true, 00:16:44.163 "copy": false, 00:16:44.163 "nvme_iov_md": false 00:16:44.163 }, 00:16:44.163 "driver_specific": { 00:16:44.163 "lvol": { 00:16:44.163 "lvol_store_uuid": "f7d57126-a47d-44e7-9309-7bd229373a26", 00:16:44.163 "base_bdev": "nvme0n1", 00:16:44.163 "thin_provision": true, 00:16:44.163 "num_allocated_clusters": 0, 00:16:44.163 "snapshot": false, 00:16:44.163 "clone": false, 00:16:44.163 "esnap_clone": false 00:16:44.163 } 00:16:44.163 } 00:16:44.163 } 00:16:44.163 ]' 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:44.163 09:22:35 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:44.422 09:22:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 322cd157-5f14-4cb4-a5e3-366d479bdc1a 00:16:44.422 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:44.422 { 00:16:44.422 "name": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:44.422 "aliases": [ 00:16:44.422 "lvs/nvme0n1p0" 00:16:44.422 ], 00:16:44.422 "product_name": "Logical Volume", 00:16:44.422 "block_size": 4096, 00:16:44.422 "num_blocks": 26476544, 00:16:44.422 "uuid": "322cd157-5f14-4cb4-a5e3-366d479bdc1a", 00:16:44.422 "assigned_rate_limits": { 00:16:44.422 "rw_ios_per_sec": 0, 00:16:44.422 "rw_mbytes_per_sec": 0, 00:16:44.422 "r_mbytes_per_sec": 0, 00:16:44.422 "w_mbytes_per_sec": 0 00:16:44.422 }, 00:16:44.422 "claimed": false, 00:16:44.422 "zoned": false, 00:16:44.422 "supported_io_types": { 00:16:44.422 "read": true, 00:16:44.422 "write": true, 00:16:44.422 "unmap": true, 00:16:44.422 "flush": false, 00:16:44.422 "reset": true, 00:16:44.422 "nvme_admin": false, 00:16:44.422 "nvme_io": false, 00:16:44.422 "nvme_io_md": false, 00:16:44.422 "write_zeroes": true, 00:16:44.422 "zcopy": false, 00:16:44.422 "get_zone_info": false, 00:16:44.422 "zone_management": false, 00:16:44.422 "zone_append": false, 00:16:44.422 "compare": false, 00:16:44.422 "compare_and_write": false, 00:16:44.422 "abort": false, 00:16:44.422 "seek_hole": true, 00:16:44.422 "seek_data": true, 00:16:44.422 "copy": false, 00:16:44.422 "nvme_iov_md": false 00:16:44.422 }, 00:16:44.422 "driver_specific": { 00:16:44.422 "lvol": { 00:16:44.422 "lvol_store_uuid": "f7d57126-a47d-44e7-9309-7bd229373a26", 00:16:44.422 "base_bdev": "nvme0n1", 00:16:44.422 "thin_provision": true, 00:16:44.422 "num_allocated_clusters": 0, 00:16:44.422 "snapshot": false, 00:16:44.422 "clone": false, 00:16:44.422 "esnap_clone": false 00:16:44.422 } 00:16:44.422 } 00:16:44.422 } 00:16:44.422 ]' 00:16:44.422 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:44.681 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:44.681 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:44.681 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:44.681 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:44.681 09:22:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:44.682 09:22:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:44.682 09:22:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 322cd157-5f14-4cb4-a5e3-366d479bdc1a -c nvc0n1p0 --l2p_dram_limit 20 00:16:44.682 [2024-10-08 09:22:36.328745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.328968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.682 [2024-10-08 09:22:36.329018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:44.682 [2024-10-08 09:22:36.329040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.329114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.329135] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.682 [2024-10-08 09:22:36.329151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:44.682 [2024-10-08 09:22:36.329160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.329176] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.682 [2024-10-08 09:22:36.329817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.682 [2024-10-08 09:22:36.329831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.329839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.682 [2024-10-08 09:22:36.329847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:16:44.682 [2024-10-08 09:22:36.329855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.329909] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49f8589c-85ce-4515-b456-02acb6678ecb 00:16:44.682 [2024-10-08 09:22:36.331248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.331360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:44.682 [2024-10-08 09:22:36.331378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:44.682 [2024-10-08 09:22:36.331385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.338287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.338408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.682 [2024-10-08 09:22:36.338424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.847 ms 00:16:44.682 [2024-10-08 09:22:36.338432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.338521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.338529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.682 [2024-10-08 09:22:36.338541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:44.682 [2024-10-08 09:22:36.338547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.338595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.338603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.682 [2024-10-08 09:22:36.338613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:44.682 [2024-10-08 09:22:36.338619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.338638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.682 [2024-10-08 09:22:36.342006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.342109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.682 [2024-10-08 09:22:36.342121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.376 ms 00:16:44.682 [2024-10-08 09:22:36.342129] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.342157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.342165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.682 [2024-10-08 09:22:36.342172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:44.682 [2024-10-08 09:22:36.342179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.342199] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:44.682 [2024-10-08 09:22:36.342318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.682 [2024-10-08 09:22:36.342328] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.682 [2024-10-08 09:22:36.342338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.682 [2024-10-08 09:22:36.342347] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342356] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:44.682 [2024-10-08 09:22:36.342372] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.682 [2024-10-08 09:22:36.342378] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.682 [2024-10-08 09:22:36.342385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.682 [2024-10-08 09:22:36.342408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.342415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.682 [2024-10-08 09:22:36.342422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:44.682 [2024-10-08 09:22:36.342430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.342495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.682 [2024-10-08 09:22:36.342505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.682 [2024-10-08 09:22:36.342511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:44.682 [2024-10-08 09:22:36.342520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.682 [2024-10-08 09:22:36.342604] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.682 [2024-10-08 09:22:36.342614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.682 [2024-10-08 09:22:36.342620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.682 [2024-10-08 09:22:36.342642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:44.682 
[2024-10-08 09:22:36.342655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.682 [2024-10-08 09:22:36.342661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.682 [2024-10-08 09:22:36.342672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.682 [2024-10-08 09:22:36.342685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:44.682 [2024-10-08 09:22:36.342691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.682 [2024-10-08 09:22:36.342698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.682 [2024-10-08 09:22:36.342704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:44.682 [2024-10-08 09:22:36.342712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.682 [2024-10-08 09:22:36.342724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.682 [2024-10-08 09:22:36.342742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.682 [2024-10-08 09:22:36.342762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.682 [2024-10-08 09:22:36.342782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.682 [2024-10-08 09:22:36.342801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.682 [2024-10-08 09:22:36.342820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.682 [2024-10-08 09:22:36.342831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.682 [2024-10-08 09:22:36.342837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:44.682 [2024-10-08 09:22:36.342843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.682 [2024-10-08 09:22:36.342849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.682 [2024-10-08 09:22:36.342854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:44.682 [2024-10-08 09:22:36.342861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.682 [2024-10-08 09:22:36.342873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:44.682 [2024-10-08 09:22:36.342878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.682 [2024-10-08 09:22:36.342891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.682 [2024-10-08 09:22:36.342898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.682 [2024-10-08 09:22:36.342903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.682 [2024-10-08 09:22:36.342914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.682 [2024-10-08 09:22:36.342919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.682 [2024-10-08 09:22:36.342925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.683 [2024-10-08 09:22:36.342931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.683 [2024-10-08 09:22:36.342938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.683 [2024-10-08 09:22:36.342943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.683 [2024-10-08 09:22:36.342953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.683 [2024-10-08 09:22:36.342963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.342971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:44.683 [2024-10-08 09:22:36.342976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:44.683 [2024-10-08 09:22:36.342984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:44.683 [2024-10-08 09:22:36.342991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:44.683 [2024-10-08 09:22:36.342998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:44.683 [2024-10-08 09:22:36.343003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:44.683 [2024-10-08 09:22:36.343010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:44.683 [2024-10-08 09:22:36.343016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:44.683 [2024-10-08 09:22:36.343025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:44.683 [2024-10-08 09:22:36.343030] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:44.683 [2024-10-08 09:22:36.343063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.683 [2024-10-08 09:22:36.343070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.683 [2024-10-08 09:22:36.343084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.683 [2024-10-08 09:22:36.343091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.683 [2024-10-08 09:22:36.343099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.683 [2024-10-08 09:22:36.343106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.683 [2024-10-08 09:22:36.343112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.683 [2024-10-08 09:22:36.343119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:16:44.683 [2024-10-08 09:22:36.343125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.683 [2024-10-08 09:22:36.343153] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
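The layout dump above also accounts for the --l2p_dram_limit 20 argument passed to bdev_ftl_create: the logical-to-physical table is sized from the dump's own numbers, and the DRAM limit only caps how much of that table stays resident. Checking the arithmetic against the lines above:

    20971520 L2P entries x 4 B address size = 83886080 B = 80.00 MiB
        -> matches "Region l2p ... blocks: 80.00 MiB"
    --l2p_dram_limit 20 -> at most 20 MiB of the 80 MiB table cached in DRAM
        -> "l2p maximum resident size is: 19 (of 20) MiB", reported once startup resumes below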
00:16:44.683 [2024-10-08 09:22:36.343161] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:47.214 [2024-10-08 09:22:38.440504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.440587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:47.214 [2024-10-08 09:22:38.440607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2097.338 ms 00:16:47.214 [2024-10-08 09:22:38.440616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.478130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.478196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.214 [2024-10-08 09:22:38.478214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.276 ms 00:16:47.214 [2024-10-08 09:22:38.478224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.478432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.478461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:47.214 [2024-10-08 09:22:38.478474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:47.214 [2024-10-08 09:22:38.478487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.511262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.511318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.214 [2024-10-08 09:22:38.511348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.734 ms 00:16:47.214 [2024-10-08 09:22:38.511361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.511428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.511438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.214 [2024-10-08 09:22:38.511449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:47.214 [2024-10-08 09:22:38.511459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.511941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.511964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.214 [2024-10-08 09:22:38.511976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:16:47.214 [2024-10-08 09:22:38.511983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.512124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.512135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.214 [2024-10-08 09:22:38.512148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:47.214 [2024-10-08 09:22:38.512156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.525591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.525804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.214 [2024-10-08 
09:22:38.525826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.416 ms 00:16:47.214 [2024-10-08 09:22:38.525835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.537834] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:47.214 [2024-10-08 09:22:38.543908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.543954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.214 [2024-10-08 09:22:38.543967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.973 ms 00:16:47.214 [2024-10-08 09:22:38.543977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.603127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.603213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:47.214 [2024-10-08 09:22:38.603230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.109 ms 00:16:47.214 [2024-10-08 09:22:38.603242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.603466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.603484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.214 [2024-10-08 09:22:38.603494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:16:47.214 [2024-10-08 09:22:38.603504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.627550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.627607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:47.214 [2024-10-08 09:22:38.627621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.996 ms 00:16:47.214 [2024-10-08 09:22:38.627631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.650527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.650581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:47.214 [2024-10-08 09:22:38.650595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.848 ms 00:16:47.214 [2024-10-08 09:22:38.650605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.651184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.651204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.214 [2024-10-08 09:22:38.651213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:16:47.214 [2024-10-08 09:22:38.651225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.721840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.721914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:47.214 [2024-10-08 09:22:38.721929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.578 ms 00:16:47.214 [2024-10-08 09:22:38.721940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 
09:22:38.748420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.748695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:47.214 [2024-10-08 09:22:38.748718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:16:47.214 [2024-10-08 09:22:38.748730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.773943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.214 [2024-10-08 09:22:38.774012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:47.214 [2024-10-08 09:22:38.774027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.125 ms 00:16:47.214 [2024-10-08 09:22:38.774037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.214 [2024-10-08 09:22:38.797953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.215 [2024-10-08 09:22:38.798029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.215 [2024-10-08 09:22:38.798044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.868 ms 00:16:47.215 [2024-10-08 09:22:38.798055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.215 [2024-10-08 09:22:38.798103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.215 [2024-10-08 09:22:38.798120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.215 [2024-10-08 09:22:38.798130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:47.215 [2024-10-08 09:22:38.798139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.215 [2024-10-08 09:22:38.798232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.215 [2024-10-08 09:22:38.798248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.215 [2024-10-08 09:22:38.798257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:47.215 [2024-10-08 09:22:38.798266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.215 [2024-10-08 09:22:38.799342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2470.110 ms, result 0 00:16:47.215 { 00:16:47.215 "name": "ftl0", 00:16:47.215 "uuid": "49f8589c-85ce-4515-b456-02acb6678ecb" 00:16:47.215 } 00:16:47.215 09:22:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:47.215 09:22:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:47.215 09:22:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:47.491 09:22:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:47.491 [2024-10-08 09:22:39.115546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:47.491 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:47.491 Zero copy mechanism will not be used. 00:16:47.491 Running I/O for 4 seconds... 
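Every step bdevperf just performed can be replayed by hand against any SPDK target listening on the default RPC socket. A condensed sketch of the calls visible in this log -- the PCI addresses and sizes are taken from this run, and <lvol-bdev> stands for the bdev name/UUID printed by bdev_lvol_create rather than a literal value:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device, exposes nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the new lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"                 # 103424 MiB (101 GiB) thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device, exposes nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice: nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-bdev> -c nvc0n1p0 --l2p_dram_limit 20

The -t 240 on the last call mirrors the timeout=240 set in bdevperf.sh above: FTL startup may have to scrub the whole NV cache region (about 2.1 s for the 5 chunks in this run) before bdev_ftl_create returns.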
00:16:49.855 2997.00 IOPS, 199.02 MiB/s [2024-10-08T09:22:42.472Z] 3070.50 IOPS, 203.90 MiB/s [2024-10-08T09:22:43.406Z] 3063.00 IOPS, 203.40 MiB/s [2024-10-08T09:22:43.406Z] 3165.50 IOPS, 210.21 MiB/s 00:16:51.723 Latency(us) 00:16:51.723 [2024-10-08T09:22:43.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.723 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:51.723 ftl0 : 4.00 3164.42 210.14 0.00 0.00 333.65 155.18 45976.02 00:16:51.723 [2024-10-08T09:22:43.406Z] =================================================================================================================== 00:16:51.723 [2024-10-08T09:22:43.406Z] Total : 3164.42 210.14 0.00 0.00 333.65 155.18 45976.02 00:16:51.723 { 00:16:51.723 "results": [ 00:16:51.723 { 00:16:51.723 "job": "ftl0", 00:16:51.723 "core_mask": "0x1", 00:16:51.723 "workload": "randwrite", 00:16:51.723 "status": "finished", 00:16:51.723 "queue_depth": 1, 00:16:51.723 "io_size": 69632, 00:16:51.723 "runtime": 4.001677, 00:16:51.723 "iops": 3164.4233155249663, 00:16:51.723 "mibps": 210.1374857965798, 00:16:51.723 "io_failed": 0, 00:16:51.723 "io_timeout": 0, 00:16:51.723 "avg_latency_us": 333.64551114998875, 00:16:51.723 "min_latency_us": 155.17538461538462, 00:16:51.723 "max_latency_us": 45976.02461538462 00:16:51.723 } 00:16:51.723 ], 00:16:51.723 "core_count": 1 00:16:51.723 } 00:16:51.723 [2024-10-08 09:22:43.125317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:51.723 09:22:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:51.723 [2024-10-08 09:22:43.226604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:51.723 Running I/O for 4 seconds... 
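A quick arithmetic check, not part of the test itself: the "mibps" field in each result is iops * io_size / 2^20. For the first pass summarized above and the second pass whose results follow:

awk 'BEGIN { printf "%.2f\n", 3164.42 * 69632 / 2^20 }'    # 210.14, first pass
awk 'BEGIN { printf "%.2f\n", 10765.06 * 4096 / 2^20 }'    # 42.05, second pass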
00:16:53.589 10926.00 IOPS, 42.68 MiB/s [2024-10-08T09:22:46.644Z] 11081.00 IOPS, 43.29 MiB/s [2024-10-08T09:22:47.577Z] 10872.00 IOPS, 42.47 MiB/s [2024-10-08T09:22:47.577Z] 10773.50 IOPS, 42.08 MiB/s 00:16:55.894 Latency(us) 00:16:55.894 [2024-10-08T09:22:47.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.894 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.894 ftl0 : 4.01 10765.06 42.05 0.00 0.00 11867.27 230.01 30852.33 00:16:55.894 [2024-10-08T09:22:47.577Z] =================================================================================================================== 00:16:55.894 [2024-10-08T09:22:47.577Z] Total : 10765.06 42.05 0.00 0.00 11867.27 0.00 30852.33 00:16:55.894 [2024-10-08 09:22:47.249841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:55.894 { 00:16:55.894 "results": [ 00:16:55.894 { 00:16:55.894 "job": "ftl0", 00:16:55.894 "core_mask": "0x1", 00:16:55.895 "workload": "randwrite", 00:16:55.895 "status": "finished", 00:16:55.895 "queue_depth": 128, 00:16:55.895 "io_size": 4096, 00:16:55.895 "runtime": 4.014839, 00:16:55.895 "iops": 10765.064302703047, 00:16:55.895 "mibps": 42.05103243243378, 00:16:55.895 "io_failed": 0, 00:16:55.895 "io_timeout": 0, 00:16:55.895 "avg_latency_us": 11867.271485992953, 00:16:55.895 "min_latency_us": 230.00615384615384, 00:16:55.895 "max_latency_us": 30852.332307692308 00:16:55.895 } 00:16:55.895 ], 00:16:55.895 "core_count": 1 00:16:55.895 } 00:16:55.895 09:22:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:55.895 [2024-10-08 09:22:47.368312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:55.895 Running I/O for 4 seconds... 
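The third pass switches the workload to verify, which writes and then reads back the LBA range reported in the results that follow (start 0x0, length 0x1400000, i.e. 20971520 per the verify_range field in the JSON). A sketch of that invocation and of the teardown issued afterwards at bdevperf.sh@34, again using this run's paths:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 128 -w verify -t 4 -o 4096
# Tear down the FTL bdev once the passes complete; the RPC prints "true"
# on success, as seen later in this log.
"$SPDK/scripts/rpc.py" bdev_ftl_delete -b ftl0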
00:16:57.766 9410.00 IOPS, 36.76 MiB/s [2024-10-08T09:22:50.383Z] 9347.50 IOPS, 36.51 MiB/s [2024-10-08T09:22:51.758Z] 9137.00 IOPS, 35.69 MiB/s [2024-10-08T09:22:51.758Z] 9058.25 IOPS, 35.38 MiB/s 00:17:00.075 Latency(us) 00:17:00.075 [2024-10-08T09:22:51.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.075 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:00.075 Verification LBA range: start 0x0 length 0x1400000 00:17:00.075 ftl0 : 4.01 9068.52 35.42 0.00 0.00 14068.78 220.55 46177.67 00:17:00.075 [2024-10-08T09:22:51.758Z] =================================================================================================================== 00:17:00.075 [2024-10-08T09:22:51.758Z] Total : 9068.52 35.42 0.00 0.00 14068.78 0.00 46177.67 00:17:00.075 [2024-10-08 09:22:51.393685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:00.075 { 00:17:00.075 "results": [ 00:17:00.075 { 00:17:00.075 "job": "ftl0", 00:17:00.075 "core_mask": "0x1", 00:17:00.075 "workload": "verify", 00:17:00.075 "status": "finished", 00:17:00.075 "verify_range": { 00:17:00.075 "start": 0, 00:17:00.075 "length": 20971520 00:17:00.075 }, 00:17:00.075 "queue_depth": 128, 00:17:00.075 "io_size": 4096, 00:17:00.075 "runtime": 4.009473, 00:17:00.075 "iops": 9068.523469293845, 00:17:00.075 "mibps": 35.42391980192908, 00:17:00.075 "io_failed": 0, 00:17:00.075 "io_timeout": 0, 00:17:00.075 "avg_latency_us": 14068.781298806805, 00:17:00.075 "min_latency_us": 220.55384615384617, 00:17:00.075 "max_latency_us": 46177.67384615385 00:17:00.075 } 00:17:00.075 ], 00:17:00.075 "core_count": 1 00:17:00.075 } 00:17:00.075 09:22:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:00.075 [2024-10-08 09:22:51.600308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.600516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.075 [2024-10-08 09:22:51.600585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.075 [2024-10-08 09:22:51.600613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.075 [2024-10-08 09:22:51.600651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.075 [2024-10-08 09:22:51.603489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.603520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.075 [2024-10-08 09:22:51.603533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:17:00.075 [2024-10-08 09:22:51.603542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.075 [2024-10-08 09:22:51.605272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.605304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.075 [2024-10-08 09:22:51.605316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:17:00.075 [2024-10-08 09:22:51.605324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.075 [2024-10-08 09:22:51.732844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.732882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:17:00.075 [2024-10-08 09:22:51.732896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.500 ms 00:17:00.075 [2024-10-08 09:22:51.732904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.075 [2024-10-08 09:22:51.737565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.737592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.075 [2024-10-08 09:22:51.737602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.635 ms 00:17:00.075 [2024-10-08 09:22:51.737609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.075 [2024-10-08 09:22:51.755931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.075 [2024-10-08 09:22:51.755959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.075 [2024-10-08 09:22:51.755970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.280 ms 00:17:00.075 [2024-10-08 09:22:51.755977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.768736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.768768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.335 [2024-10-08 09:22:51.768780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.729 ms 00:17:00.335 [2024-10-08 09:22:51.768786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.768897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.768907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.335 [2024-10-08 09:22:51.768917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:00.335 [2024-10-08 09:22:51.768923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.786402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.786550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:00.335 [2024-10-08 09:22:51.786567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.462 ms 00:17:00.335 [2024-10-08 09:22:51.786574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.803923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.803948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:00.335 [2024-10-08 09:22:51.803957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.322 ms 00:17:00.335 [2024-10-08 09:22:51.803963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.821144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.821169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.335 [2024-10-08 09:22:51.821178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.152 ms 00:17:00.335 [2024-10-08 09:22:51.821184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.838212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.335 [2024-10-08 09:22:51.838239] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.335 [2024-10-08 09:22:51.838250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.972 ms 00:17:00.335 [2024-10-08 09:22:51.838256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.335 [2024-10-08 09:22:51.838283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.335 [2024-10-08 09:22:51.838296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:00.335 [2024-10-08 09:22:51.838465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.335 [2024-10-08 09:22:51.838637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838986] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.838999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.839006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.336 [2024-10-08 09:22:51.839018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.336 [2024-10-08 09:22:51.839026] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49f8589c-85ce-4515-b456-02acb6678ecb 00:17:00.336 [2024-10-08 09:22:51.839032] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.336 [2024-10-08 09:22:51.839039] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:00.336 [2024-10-08 09:22:51.839045] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.336 [2024-10-08 09:22:51.839052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.336 [2024-10-08 09:22:51.839057] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.336 [2024-10-08 09:22:51.839064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.336 [2024-10-08 09:22:51.839070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.336 [2024-10-08 09:22:51.839077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.336 [2024-10-08 09:22:51.839083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.336 [2024-10-08 09:22:51.839089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.336 [2024-10-08 09:22:51.839095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.336 [2024-10-08 09:22:51.839106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:17:00.336 [2024-10-08 09:22:51.839114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.849179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.336 [2024-10-08 09:22:51.849313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.336 [2024-10-08 09:22:51.849329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.040 ms 00:17:00.336 [2024-10-08 09:22:51.849336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.849653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.336 [2024-10-08 09:22:51.849663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.336 [2024-10-08 09:22:51.849672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:00.336 [2024-10-08 09:22:51.849677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.874941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.875073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.336 [2024-10-08 09:22:51.875091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.875098] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.875154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.875161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.336 [2024-10-08 09:22:51.875171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.875178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.875234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.875242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.336 [2024-10-08 09:22:51.875250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.875256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.875269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.875276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.336 [2024-10-08 09:22:51.875283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.875291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.938793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.938847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.336 [2024-10-08 09:22:51.938861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.938868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.989879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.990112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.336 [2024-10-08 09:22:51.990133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.990139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.990263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.336 [2024-10-08 09:22:51.990271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.336 [2024-10-08 09:22:51.990280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.336 [2024-10-08 09:22:51.990287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.336 [2024-10-08 09:22:51.990323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.337 [2024-10-08 09:22:51.990330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.337 [2024-10-08 09:22:51.990338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.337 [2024-10-08 09:22:51.990345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.337 [2024-10-08 09:22:51.990447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.337 [2024-10-08 09:22:51.990456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.337 [2024-10-08 09:22:51.990467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:00.337 [2024-10-08 09:22:51.990473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.337 [2024-10-08 09:22:51.990499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.337 [2024-10-08 09:22:51.990507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.337 [2024-10-08 09:22:51.990515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.337 [2024-10-08 09:22:51.990521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.337 [2024-10-08 09:22:51.990561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.337 [2024-10-08 09:22:51.990568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.337 [2024-10-08 09:22:51.990576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.337 [2024-10-08 09:22:51.990582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.337 [2024-10-08 09:22:51.990623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.337 [2024-10-08 09:22:51.990632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.337 [2024-10-08 09:22:51.990639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.337 [2024-10-08 09:22:51.990645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.337 [2024-10-08 09:22:51.990767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.420 ms, result 0 00:17:00.337 true 00:17:00.337 09:22:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73574 00:17:00.337 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73574 ']' 00:17:00.337 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73574 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73574 00:17:00.595 killing process with pid 73574 00:17:00.595 Received shutdown signal, test time was about 4.000000 seconds 00:17:00.595 00:17:00.595 Latency(us) 00:17:00.595 [2024-10-08T09:22:52.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.595 [2024-10-08T09:22:52.278Z] =================================================================================================================== 00:17:00.595 [2024-10-08T09:22:52.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73574' 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73574 00:17:00.595 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73574 00:17:01.202 09:22:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:01.202 09:22:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:01.202 Remove shared memory files 00:17:01.202 09:22:52 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:01.203 09:22:52 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:01.203 ************************************ 00:17:01.203 END TEST ftl_bdevperf 00:17:01.203 ************************************ 00:17:01.203 00:17:01.203 real 0m20.224s 00:17:01.203 user 0m22.820s 00:17:01.203 sys 0m0.811s 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.203 09:22:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:01.203 09:22:52 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:01.203 09:22:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:01.203 09:22:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.203 09:22:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:01.203 ************************************ 00:17:01.203 START TEST ftl_trim 00:17:01.203 ************************************ 00:17:01.203 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:01.462 * Looking for test storage... 00:17:01.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.462 09:22:52 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.462 --rc genhtml_branch_coverage=1 00:17:01.462 --rc genhtml_function_coverage=1 00:17:01.462 --rc genhtml_legend=1 00:17:01.462 --rc geninfo_all_blocks=1 00:17:01.462 --rc geninfo_unexecuted_blocks=1 00:17:01.462 00:17:01.462 ' 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.462 --rc genhtml_branch_coverage=1 00:17:01.462 --rc genhtml_function_coverage=1 00:17:01.462 --rc genhtml_legend=1 00:17:01.462 --rc geninfo_all_blocks=1 00:17:01.462 --rc geninfo_unexecuted_blocks=1 00:17:01.462 00:17:01.462 ' 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.462 --rc genhtml_branch_coverage=1 00:17:01.462 --rc genhtml_function_coverage=1 00:17:01.462 --rc genhtml_legend=1 00:17:01.462 --rc geninfo_all_blocks=1 00:17:01.462 --rc geninfo_unexecuted_blocks=1 00:17:01.462 00:17:01.462 ' 00:17:01.462 09:22:52 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:01.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.462 --rc genhtml_branch_coverage=1 00:17:01.462 --rc genhtml_function_coverage=1 00:17:01.462 --rc genhtml_legend=1 00:17:01.462 --rc geninfo_all_blocks=1 00:17:01.462 --rc geninfo_unexecuted_blocks=1 00:17:01.462 00:17:01.462 ' 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
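The scripts/common.sh trace walked through just above (lt 1.15 2 via cmp_versions) amounts to splitting both version strings on ".-:" and comparing them numerically field by field, with missing fields treated as 0. A standalone sketch of that logic; the function name and shape here are illustrative, not the library's:

# Return 0 (true) if $1 is a strictly lower version than $2.
ver_lt() {
    local IFS=.-: i
    local -a v1=($1) v2=($2)
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # all fields equal: not strictly less
}
ver_lt 1.15 2 && echo "1.15 < 2"   # prints, since 1 < 2 in the first field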
00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:01.462 09:22:52 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:01.462 09:22:53 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.463 09:22:53 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73912 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73912 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73912 ']' 00:17:01.463 09:22:53 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.463 09:22:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:01.463 [2024-10-08 09:22:53.080320] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:01.463 [2024-10-08 09:22:53.081165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73912 ] 00:17:01.721 [2024-10-08 09:22:53.232864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.979 [2024-10-08 09:22:53.445248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.979 [2024-10-08 09:22:53.445423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.979 [2024-10-08 09:22:53.445451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.545 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.545 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:02.545 09:22:54 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:02.803 09:22:54 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:02.803 09:22:54 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:02.803 09:22:54 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:02.803 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:02.803 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.803 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:02.803 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:02.803 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:03.062 { 00:17:03.062 "name": "nvme0n1", 00:17:03.062 "aliases": [ 
00:17:03.062 "3cd63691-4b5d-4ac6-a96f-6dc32736a6c2" 00:17:03.062 ], 00:17:03.062 "product_name": "NVMe disk", 00:17:03.062 "block_size": 4096, 00:17:03.062 "num_blocks": 1310720, 00:17:03.062 "uuid": "3cd63691-4b5d-4ac6-a96f-6dc32736a6c2", 00:17:03.062 "numa_id": -1, 00:17:03.062 "assigned_rate_limits": { 00:17:03.062 "rw_ios_per_sec": 0, 00:17:03.062 "rw_mbytes_per_sec": 0, 00:17:03.062 "r_mbytes_per_sec": 0, 00:17:03.062 "w_mbytes_per_sec": 0 00:17:03.062 }, 00:17:03.062 "claimed": true, 00:17:03.062 "claim_type": "read_many_write_one", 00:17:03.062 "zoned": false, 00:17:03.062 "supported_io_types": { 00:17:03.062 "read": true, 00:17:03.062 "write": true, 00:17:03.062 "unmap": true, 00:17:03.062 "flush": true, 00:17:03.062 "reset": true, 00:17:03.062 "nvme_admin": true, 00:17:03.062 "nvme_io": true, 00:17:03.062 "nvme_io_md": false, 00:17:03.062 "write_zeroes": true, 00:17:03.062 "zcopy": false, 00:17:03.062 "get_zone_info": false, 00:17:03.062 "zone_management": false, 00:17:03.062 "zone_append": false, 00:17:03.062 "compare": true, 00:17:03.062 "compare_and_write": false, 00:17:03.062 "abort": true, 00:17:03.062 "seek_hole": false, 00:17:03.062 "seek_data": false, 00:17:03.062 "copy": true, 00:17:03.062 "nvme_iov_md": false 00:17:03.062 }, 00:17:03.062 "driver_specific": { 00:17:03.062 "nvme": [ 00:17:03.062 { 00:17:03.062 "pci_address": "0000:00:11.0", 00:17:03.062 "trid": { 00:17:03.062 "trtype": "PCIe", 00:17:03.062 "traddr": "0000:00:11.0" 00:17:03.062 }, 00:17:03.062 "ctrlr_data": { 00:17:03.062 "cntlid": 0, 00:17:03.062 "vendor_id": "0x1b36", 00:17:03.062 "model_number": "QEMU NVMe Ctrl", 00:17:03.062 "serial_number": "12341", 00:17:03.062 "firmware_revision": "8.0.0", 00:17:03.062 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:03.062 "oacs": { 00:17:03.062 "security": 0, 00:17:03.062 "format": 1, 00:17:03.062 "firmware": 0, 00:17:03.062 "ns_manage": 1 00:17:03.062 }, 00:17:03.062 "multi_ctrlr": false, 00:17:03.062 "ana_reporting": false 00:17:03.062 }, 00:17:03.062 "vs": { 00:17:03.062 "nvme_version": "1.4" 00:17:03.062 }, 00:17:03.062 "ns_data": { 00:17:03.062 "id": 1, 00:17:03.062 "can_share": false 00:17:03.062 } 00:17:03.062 } 00:17:03.062 ], 00:17:03.062 "mp_policy": "active_passive" 00:17:03.062 } 00:17:03.062 } 00:17:03.062 ]' 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:03.062 09:22:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:03.062 09:22:54 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:03.062 09:22:54 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:03.062 09:22:54 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:03.062 09:22:54 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:03.062 09:22:54 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:03.321 09:22:54 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f7d57126-a47d-44e7-9309-7bd229373a26 00:17:03.321 09:22:54 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:03.321 09:22:54 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f7d57126-a47d-44e7-9309-7bd229373a26 00:17:03.579 09:22:55 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e67d8545-2338-4b79-9ecc-1d83b9b9a784 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e67d8545-2338-4b79-9ecc-1d83b9b9a784 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:03.838 09:22:55 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:03.838 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:03.838 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:03.838 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:03.838 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:03.838 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:04.097 { 00:17:04.097 "name": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:04.097 "aliases": [ 00:17:04.097 "lvs/nvme0n1p0" 00:17:04.097 ], 00:17:04.097 "product_name": "Logical Volume", 00:17:04.097 "block_size": 4096, 00:17:04.097 "num_blocks": 26476544, 00:17:04.097 "uuid": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:04.097 "assigned_rate_limits": { 00:17:04.097 "rw_ios_per_sec": 0, 00:17:04.097 "rw_mbytes_per_sec": 0, 00:17:04.097 "r_mbytes_per_sec": 0, 00:17:04.097 "w_mbytes_per_sec": 0 00:17:04.097 }, 00:17:04.097 "claimed": false, 00:17:04.097 "zoned": false, 00:17:04.097 "supported_io_types": { 00:17:04.097 "read": true, 00:17:04.097 "write": true, 00:17:04.097 "unmap": true, 00:17:04.097 "flush": false, 00:17:04.097 "reset": true, 00:17:04.097 "nvme_admin": false, 00:17:04.097 "nvme_io": false, 00:17:04.097 "nvme_io_md": false, 00:17:04.097 "write_zeroes": true, 00:17:04.097 "zcopy": false, 00:17:04.097 "get_zone_info": false, 00:17:04.097 "zone_management": false, 00:17:04.097 "zone_append": false, 00:17:04.097 "compare": false, 00:17:04.097 "compare_and_write": false, 00:17:04.097 "abort": false, 00:17:04.097 "seek_hole": true, 00:17:04.097 "seek_data": true, 00:17:04.097 "copy": false, 00:17:04.097 "nvme_iov_md": false 00:17:04.097 }, 00:17:04.097 "driver_specific": { 00:17:04.097 "lvol": { 00:17:04.097 "lvol_store_uuid": "e67d8545-2338-4b79-9ecc-1d83b9b9a784", 00:17:04.097 "base_bdev": "nvme0n1", 00:17:04.097 "thin_provision": true, 00:17:04.097 "num_allocated_clusters": 0, 00:17:04.097 "snapshot": false, 00:17:04.097 "clone": false, 00:17:04.097 "esnap_clone": false 00:17:04.097 } 00:17:04.097 } 00:17:04.097 } 00:17:04.097 ]' 00:17:04.097 09:22:55 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:04.097 09:22:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:04.097 09:22:55 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:04.097 09:22:55 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:04.097 09:22:55 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:04.361 09:22:56 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:04.361 09:22:56 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:04.361 09:22:56 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.361 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.361 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.361 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:04.361 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:04.361 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.619 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:04.619 { 00:17:04.619 "name": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:04.619 "aliases": [ 00:17:04.619 "lvs/nvme0n1p0" 00:17:04.619 ], 00:17:04.619 "product_name": "Logical Volume", 00:17:04.619 "block_size": 4096, 00:17:04.619 "num_blocks": 26476544, 00:17:04.619 "uuid": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:04.619 "assigned_rate_limits": { 00:17:04.619 "rw_ios_per_sec": 0, 00:17:04.619 "rw_mbytes_per_sec": 0, 00:17:04.619 "r_mbytes_per_sec": 0, 00:17:04.619 "w_mbytes_per_sec": 0 00:17:04.619 }, 00:17:04.619 "claimed": false, 00:17:04.619 "zoned": false, 00:17:04.619 "supported_io_types": { 00:17:04.619 "read": true, 00:17:04.619 "write": true, 00:17:04.619 "unmap": true, 00:17:04.620 "flush": false, 00:17:04.620 "reset": true, 00:17:04.620 "nvme_admin": false, 00:17:04.620 "nvme_io": false, 00:17:04.620 "nvme_io_md": false, 00:17:04.620 "write_zeroes": true, 00:17:04.620 "zcopy": false, 00:17:04.620 "get_zone_info": false, 00:17:04.620 "zone_management": false, 00:17:04.620 "zone_append": false, 00:17:04.620 "compare": false, 00:17:04.620 "compare_and_write": false, 00:17:04.620 "abort": false, 00:17:04.620 "seek_hole": true, 00:17:04.620 "seek_data": true, 00:17:04.620 "copy": false, 00:17:04.620 "nvme_iov_md": false 00:17:04.620 }, 00:17:04.620 "driver_specific": { 00:17:04.620 "lvol": { 00:17:04.620 "lvol_store_uuid": "e67d8545-2338-4b79-9ecc-1d83b9b9a784", 00:17:04.620 "base_bdev": "nvme0n1", 00:17:04.620 "thin_provision": true, 00:17:04.620 "num_allocated_clusters": 0, 00:17:04.620 "snapshot": false, 00:17:04.620 "clone": false, 00:17:04.620 "esnap_clone": false 00:17:04.620 } 00:17:04.620 } 00:17:04.620 } 00:17:04.620 ]' 00:17:04.620 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:04.620 09:22:56 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:04.620 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:04.620 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:04.620 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:04.620 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:04.620 09:22:56 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:04.620 09:22:56 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:04.878 09:22:56 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:04.878 09:22:56 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:04.878 09:22:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.878 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:04.878 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.878 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:04.878 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:04.878 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c18b5b2-e196-4d43-8f7a-922fae65f48e 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:05.136 { 00:17:05.136 "name": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:05.136 "aliases": [ 00:17:05.136 "lvs/nvme0n1p0" 00:17:05.136 ], 00:17:05.136 "product_name": "Logical Volume", 00:17:05.136 "block_size": 4096, 00:17:05.136 "num_blocks": 26476544, 00:17:05.136 "uuid": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:05.136 "assigned_rate_limits": { 00:17:05.136 "rw_ios_per_sec": 0, 00:17:05.136 "rw_mbytes_per_sec": 0, 00:17:05.136 "r_mbytes_per_sec": 0, 00:17:05.136 "w_mbytes_per_sec": 0 00:17:05.136 }, 00:17:05.136 "claimed": false, 00:17:05.136 "zoned": false, 00:17:05.136 "supported_io_types": { 00:17:05.136 "read": true, 00:17:05.136 "write": true, 00:17:05.136 "unmap": true, 00:17:05.136 "flush": false, 00:17:05.136 "reset": true, 00:17:05.136 "nvme_admin": false, 00:17:05.136 "nvme_io": false, 00:17:05.136 "nvme_io_md": false, 00:17:05.136 "write_zeroes": true, 00:17:05.136 "zcopy": false, 00:17:05.136 "get_zone_info": false, 00:17:05.136 "zone_management": false, 00:17:05.136 "zone_append": false, 00:17:05.136 "compare": false, 00:17:05.136 "compare_and_write": false, 00:17:05.136 "abort": false, 00:17:05.136 "seek_hole": true, 00:17:05.136 "seek_data": true, 00:17:05.136 "copy": false, 00:17:05.136 "nvme_iov_md": false 00:17:05.136 }, 00:17:05.136 "driver_specific": { 00:17:05.136 "lvol": { 00:17:05.136 "lvol_store_uuid": "e67d8545-2338-4b79-9ecc-1d83b9b9a784", 00:17:05.136 "base_bdev": "nvme0n1", 00:17:05.136 "thin_provision": true, 00:17:05.136 "num_allocated_clusters": 0, 00:17:05.136 "snapshot": false, 00:17:05.136 "clone": false, 00:17:05.136 "esnap_clone": false 00:17:05.136 } 00:17:05.136 } 00:17:05.136 } 00:17:05.136 ]' 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:05.136 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:05.137 09:22:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:05.137 09:22:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:05.137 09:22:56 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c18b5b2-e196-4d43-8f7a-922fae65f48e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:05.395 [2024-10-08 09:22:56.940098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.395 [2024-10-08 09:22:56.940156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.395 [2024-10-08 09:22:56.940172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.395 [2024-10-08 09:22:56.940181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.395 [2024-10-08 09:22:56.942646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.395 [2024-10-08 09:22:56.942828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.395 [2024-10-08 09:22:56.942846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:17:05.395 [2024-10-08 09:22:56.942853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.395 [2024-10-08 09:22:56.942976] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.395 [2024-10-08 09:22:56.943547] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.395 [2024-10-08 09:22:56.943566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.395 [2024-10-08 09:22:56.943573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.395 [2024-10-08 09:22:56.943582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:05.395 [2024-10-08 09:22:56.943590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.395 [2024-10-08 09:22:56.943683] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:05.395 [2024-10-08 09:22:56.944986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.395 [2024-10-08 09:22:56.945016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:05.395 [2024-10-08 09:22:56.945025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:05.395 [2024-10-08 09:22:56.945034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.395 [2024-10-08 09:22:56.952090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.395 [2024-10-08 09:22:56.952117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.396 [2024-10-08 09:22:56.952126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.974 ms 00:17:05.396 [2024-10-08 09:22:56.952134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.952238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 [2024-10-08 09:22:56.952248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.396 [2024-10-08 09:22:56.952255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.063 ms 00:17:05.396 [2024-10-08 09:22:56.952267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.952297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 [2024-10-08 09:22:56.952305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.396 [2024-10-08 09:22:56.952312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.396 [2024-10-08 09:22:56.952319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.952347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.396 [2024-10-08 09:22:56.955660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 [2024-10-08 09:22:56.955684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.396 [2024-10-08 09:22:56.955693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:17:05.396 [2024-10-08 09:22:56.955699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.955755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 [2024-10-08 09:22:56.955763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.396 [2024-10-08 09:22:56.955771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:05.396 [2024-10-08 09:22:56.955779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.955809] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:05.396 [2024-10-08 09:22:56.955918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.396 [2024-10-08 09:22:56.955931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.396 [2024-10-08 09:22:56.955954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.396 [2024-10-08 09:22:56.955967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.396 [2024-10-08 09:22:56.955975] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.396 [2024-10-08 09:22:56.955983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.396 [2024-10-08 09:22:56.955988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.396 [2024-10-08 09:22:56.955996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.396 [2024-10-08 09:22:56.956002] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.396 [2024-10-08 09:22:56.956010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 [2024-10-08 09:22:56.956016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.396 [2024-10-08 09:22:56.956024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:05.396 [2024-10-08 09:22:56.956030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.956115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.396 
[2024-10-08 09:22:56.956125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.396 [2024-10-08 09:22:56.956132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:05.396 [2024-10-08 09:22:56.956138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.396 [2024-10-08 09:22:56.956257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.396 [2024-10-08 09:22:56.956265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.396 [2024-10-08 09:22:56.956274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.396 [2024-10-08 09:22:56.956293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.396 [2024-10-08 09:22:56.956313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.396 [2024-10-08 09:22:56.956324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.396 [2024-10-08 09:22:56.956331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.396 [2024-10-08 09:22:56.956344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.396 [2024-10-08 09:22:56.956349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.396 [2024-10-08 09:22:56.956355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.396 [2024-10-08 09:22:56.956361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.396 [2024-10-08 09:22:56.956374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.396 [2024-10-08 09:22:56.956409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.396 [2024-10-08 09:22:56.956428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.396 [2024-10-08 09:22:56.956447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:05.396 [2024-10-08 09:22:56.956464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.396 [2024-10-08 09:22:56.956487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.396 [2024-10-08 09:22:56.956500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.396 [2024-10-08 09:22:56.956505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.396 [2024-10-08 09:22:56.956512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.396 [2024-10-08 09:22:56.956517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.396 [2024-10-08 09:22:56.956523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.396 [2024-10-08 09:22:56.956529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.396 [2024-10-08 09:22:56.956541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.396 [2024-10-08 09:22:56.956547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956552] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.396 [2024-10-08 09:22:56.956560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.396 [2024-10-08 09:22:56.956568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.396 [2024-10-08 09:22:56.956582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.396 [2024-10-08 09:22:56.956591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.396 [2024-10-08 09:22:56.956597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.396 [2024-10-08 09:22:56.956604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.396 [2024-10-08 09:22:56.956609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.396 [2024-10-08 09:22:56.956615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.396 [2024-10-08 09:22:56.956623] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.396 [2024-10-08 09:22:56.956632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.396 [2024-10-08 09:22:56.956639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.396 [2024-10-08 09:22:56.956646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.396 [2024-10-08 09:22:56.956652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:05.396 [2024-10-08 09:22:56.956658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.396 [2024-10-08 09:22:56.956664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.396 [2024-10-08 09:22:56.956671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.396 [2024-10-08 09:22:56.956677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.396 [2024-10-08 09:22:56.956683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.396 [2024-10-08 09:22:56.956691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.396 [2024-10-08 09:22:56.956699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.396 [2024-10-08 09:22:56.956705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.396 [2024-10-08 09:22:56.956712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.396 [2024-10-08 09:22:56.956718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.397 [2024-10-08 09:22:56.956725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.397 [2024-10-08 09:22:56.956731] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.397 [2024-10-08 09:22:56.956739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.397 [2024-10-08 09:22:56.956745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.397 [2024-10-08 09:22:56.956754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.397 [2024-10-08 09:22:56.956759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.397 [2024-10-08 09:22:56.956767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.397 [2024-10-08 09:22:56.956773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.397 [2024-10-08 09:22:56.956781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.397 [2024-10-08 09:22:56.956786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:17:05.397 [2024-10-08 09:22:56.956794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.397 [2024-10-08 09:22:56.956865] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:05.397 [2024-10-08 09:22:56.956875] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:07.926 [2024-10-08 09:22:59.310282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.310547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:07.926 [2024-10-08 09:22:59.310660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2353.405 ms 00:17:07.926 [2024-10-08 09:22:59.310688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.348500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.348718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.926 [2024-10-08 09:22:59.348887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.558 ms 00:17:07.926 [2024-10-08 09:22:59.348920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.349132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.349167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.926 [2024-10-08 09:22:59.349240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:07.926 [2024-10-08 09:22:59.349271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.383190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.383362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.926 [2024-10-08 09:22:59.383755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.866 ms 00:17:07.926 [2024-10-08 09:22:59.383804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.383954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.384039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.926 [2024-10-08 09:22:59.384099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:07.926 [2024-10-08 09:22:59.384124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.384592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.384697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.926 [2024-10-08 09:22:59.384756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:07.926 [2024-10-08 09:22:59.384781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.384926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.384949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.926 [2024-10-08 09:22:59.385000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:07.926 [2024-10-08 09:22:59.385030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.401247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.401363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:07.926 [2024-10-08 09:22:59.401449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.138 ms 00:17:07.926 [2024-10-08 09:22:59.401496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.413803] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.926 [2024-10-08 09:22:59.431342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.431500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.926 [2024-10-08 09:22:59.431553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.715 ms 00:17:07.926 [2024-10-08 09:22:59.431577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.496525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.496730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:07.926 [2024-10-08 09:22:59.496790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.848 ms 00:17:07.926 [2024-10-08 09:22:59.496813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.497049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.497105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:07.926 [2024-10-08 09:22:59.497168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:17:07.926 [2024-10-08 09:22:59.497190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.521633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.521808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:07.926 [2024-10-08 09:22:59.521864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.398 ms 00:17:07.926 [2024-10-08 09:22:59.521876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.544566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.544603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:07.926 [2024-10-08 09:22:59.544617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.625 ms 00:17:07.926 [2024-10-08 09:22:59.544624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.926 [2024-10-08 09:22:59.545228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.926 [2024-10-08 09:22:59.545249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:07.926 [2024-10-08 09:22:59.545261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:17:07.926 [2024-10-08 09:22:59.545269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.618908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.619094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:08.185 [2024-10-08 09:22:59.619121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.598 ms 00:17:08.185 [2024-10-08 09:22:59.619130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
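(Annotation, not part of the captured log: each FTL management step above is reported by trace_step() in mngt/ftl_mngt.c as an Action/name/duration/status group, with status 0 meaning the step succeeded. The sizing in this startup is internally consistent: the l2p region holds 23592960 entries x 4 bytes = 94371840 bytes = 90 MiB, matching the "blocks: 90.00 MiB" l2p line in the layout dump, while bdev_ftl_create was invoked with --l2p_dram_limit 60, so the "l2p maximum resident size is: 59 (of 60) MiB" notice from ftl_l2p_cache.c indicates FTL keeps at most a ~60 MiB working set of that table resident in DRAM, presumably paging the remainder.)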
00:17:08.185 [2024-10-08 09:22:59.643852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.643898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:08.185 [2024-10-08 09:22:59.643913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.623 ms 00:17:08.185 [2024-10-08 09:22:59.643921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.667116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.667155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:08.185 [2024-10-08 09:22:59.667169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.124 ms 00:17:08.185 [2024-10-08 09:22:59.667177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.690041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.690207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.185 [2024-10-08 09:22:59.690227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.782 ms 00:17:08.185 [2024-10-08 09:22:59.690235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.690304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.690315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.185 [2024-10-08 09:22:59.690329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:08.185 [2024-10-08 09:22:59.690353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.690459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.185 [2024-10-08 09:22:59.690470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.185 [2024-10-08 09:22:59.690480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:08.185 [2024-10-08 09:22:59.690489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.185 [2024-10-08 09:22:59.691406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.185 [2024-10-08 09:22:59.694402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2750.958 ms, result 0 00:17:08.185 [2024-10-08 09:22:59.695354] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.185 { 00:17:08.185 "name": "ftl0", 00:17:08.185 "uuid": "aa396233-677a-41d2-8a2d-a8108e4f192f" 00:17:08.185 } 00:17:08.185 09:22:59 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:08.185 09:22:59 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.444 09:22:59 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:08.444 [ 00:17:08.444 { 00:17:08.444 "name": "ftl0", 00:17:08.444 "aliases": [ 00:17:08.444 "aa396233-677a-41d2-8a2d-a8108e4f192f" 00:17:08.444 ], 00:17:08.444 "product_name": "FTL disk", 00:17:08.444 "block_size": 4096, 00:17:08.444 "num_blocks": 23592960, 00:17:08.444 "uuid": "aa396233-677a-41d2-8a2d-a8108e4f192f", 00:17:08.444 "assigned_rate_limits": { 00:17:08.444 "rw_ios_per_sec": 0, 00:17:08.444 "rw_mbytes_per_sec": 0, 00:17:08.444 "r_mbytes_per_sec": 0, 00:17:08.444 "w_mbytes_per_sec": 0 00:17:08.444 }, 00:17:08.444 "claimed": false, 00:17:08.444 "zoned": false, 00:17:08.444 "supported_io_types": { 00:17:08.444 "read": true, 00:17:08.444 "write": true, 00:17:08.444 "unmap": true, 00:17:08.444 "flush": true, 00:17:08.444 "reset": false, 00:17:08.444 "nvme_admin": false, 00:17:08.444 "nvme_io": false, 00:17:08.444 "nvme_io_md": false, 00:17:08.444 "write_zeroes": true, 00:17:08.444 "zcopy": false, 00:17:08.444 "get_zone_info": false, 00:17:08.444 "zone_management": false, 00:17:08.444 "zone_append": false, 00:17:08.444 "compare": false, 00:17:08.444 "compare_and_write": false, 00:17:08.444 "abort": false, 00:17:08.444 "seek_hole": false, 00:17:08.444 "seek_data": false, 00:17:08.444 "copy": false, 00:17:08.444 "nvme_iov_md": false 00:17:08.444 }, 00:17:08.444 "driver_specific": { 00:17:08.444 "ftl": { 00:17:08.444 "base_bdev": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 00:17:08.444 "cache": "nvc0n1p0" 00:17:08.444 } 00:17:08.444 } 00:17:08.444 } 00:17:08.444 ] 00:17:08.702 09:23:00 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:08.702 09:23:00 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:08.702 09:23:00 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:08.702 09:23:00 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:08.702 09:23:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:08.961 09:23:00 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:08.961 { 00:17:08.961 "name": "ftl0", 00:17:08.961 "aliases": [ 00:17:08.961 "aa396233-677a-41d2-8a2d-a8108e4f192f" 00:17:08.961 ], 00:17:08.961 "product_name": "FTL disk", 00:17:08.961 "block_size": 4096, 00:17:08.961 "num_blocks": 23592960, 00:17:08.961 "uuid": "aa396233-677a-41d2-8a2d-a8108e4f192f", 00:17:08.961 "assigned_rate_limits": { 00:17:08.961 "rw_ios_per_sec": 0, 00:17:08.961 "rw_mbytes_per_sec": 0, 00:17:08.961 "r_mbytes_per_sec": 0, 00:17:08.961 "w_mbytes_per_sec": 0 00:17:08.961 }, 00:17:08.961 "claimed": false, 00:17:08.961 "zoned": false, 00:17:08.961 "supported_io_types": { 00:17:08.961 "read": true, 00:17:08.961 "write": true, 00:17:08.961 "unmap": true, 00:17:08.961 "flush": true, 00:17:08.961 "reset": false, 00:17:08.961 "nvme_admin": false, 00:17:08.961 "nvme_io": false, 00:17:08.961 "nvme_io_md": false, 00:17:08.961 "write_zeroes": true, 00:17:08.961 "zcopy": false, 00:17:08.961 "get_zone_info": false, 00:17:08.961 "zone_management": false, 00:17:08.961 "zone_append": false, 00:17:08.961 "compare": false, 00:17:08.961 "compare_and_write": false, 00:17:08.961 "abort": false, 00:17:08.961 "seek_hole": false, 00:17:08.961 "seek_data": false, 00:17:08.961 "copy": false, 00:17:08.961 "nvme_iov_md": false 00:17:08.961 }, 00:17:08.961 "driver_specific": { 00:17:08.961 "ftl": { 00:17:08.961 "base_bdev": "4c18b5b2-e196-4d43-8f7a-922fae65f48e", 
00:17:08.961 "cache": "nvc0n1p0" 00:17:08.961 } 00:17:08.961 } 00:17:08.961 } 00:17:08.961 ]' 00:17:08.961 09:23:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:08.961 09:23:00 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:08.961 09:23:00 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:09.221 [2024-10-08 09:23:00.775022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.775086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.221 [2024-10-08 09:23:00.775099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.221 [2024-10-08 09:23:00.775107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.775139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.221 [2024-10-08 09:23:00.777377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.777411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.221 [2024-10-08 09:23:00.777425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:17:09.221 [2024-10-08 09:23:00.777431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.777915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.777934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.221 [2024-10-08 09:23:00.777945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:17:09.221 [2024-10-08 09:23:00.777951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.780711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.780873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.221 [2024-10-08 09:23:00.780888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:17:09.221 [2024-10-08 09:23:00.780894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.786194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.786219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.221 [2024-10-08 09:23:00.786230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:17:09.221 [2024-10-08 09:23:00.786238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.805324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.805353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.221 [2024-10-08 09:23:00.805368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.022 ms 00:17:09.221 [2024-10-08 09:23:00.805375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.818246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.818275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.221 [2024-10-08 09:23:00.818287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.791 ms 00:17:09.221 [2024-10-08 09:23:00.818295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.818492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.818502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.221 [2024-10-08 09:23:00.818511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:09.221 [2024-10-08 09:23:00.818518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.836612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.836639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.221 [2024-10-08 09:23:00.836650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.060 ms 00:17:09.221 [2024-10-08 09:23:00.836656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.853835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.853861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.221 [2024-10-08 09:23:00.853874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.132 ms 00:17:09.221 [2024-10-08 09:23:00.853880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.870861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.870890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.221 [2024-10-08 09:23:00.870901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.913 ms 00:17:09.221 [2024-10-08 09:23:00.870906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.887795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.221 [2024-10-08 09:23:00.887825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.221 [2024-10-08 09:23:00.887836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.784 ms 00:17:09.221 [2024-10-08 09:23:00.887842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.221 [2024-10-08 09:23:00.887896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.221 [2024-10-08 09:23:00.887910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.887963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.887969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.887978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.887984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.887995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888009] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 
[2024-10-08 09:23:00.888201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.221 [2024-10-08 09:23:00.888330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:09.222 [2024-10-08 09:23:00.888373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.222 [2024-10-08 09:23:00.888702] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.222 [2024-10-08 09:23:00.888714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:09.222 [2024-10-08 09:23:00.888720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.222 [2024-10-08 09:23:00.888728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.222 [2024-10-08 09:23:00.888734] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.222 [2024-10-08 09:23:00.888742] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.222 [2024-10-08 09:23:00.888747] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.222 [2024-10-08 09:23:00.888755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:09.222 [2024-10-08 09:23:00.888761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.222 [2024-10-08 09:23:00.888768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.222 [2024-10-08 09:23:00.888778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.222 [2024-10-08 09:23:00.888786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.222 [2024-10-08 09:23:00.888791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.222 [2024-10-08 09:23:00.888800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:17:09.222 [2024-10-08 09:23:00.888806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.222 [2024-10-08 09:23:00.898886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.222 [2024-10-08 09:23:00.899043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.222 [2024-10-08 09:23:00.899062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.048 ms 00:17:09.222 [2024-10-08 09:23:00.899068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.222 [2024-10-08 09:23:00.899427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.222 [2024-10-08 09:23:00.899439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.222 [2024-10-08 09:23:00.899450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:09.222 [2024-10-08 09:23:00.899456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:00.935534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:00.935575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.481 [2024-10-08 09:23:00.935586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:00.935593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:00.935697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:00.935705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.481 [2024-10-08 09:23:00.935716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:00.935722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:00.935785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:00.935793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.481 [2024-10-08 09:23:00.935804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:00.935810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:00.935836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:00.935843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.481 [2024-10-08 09:23:00.935851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:00.935857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.001917] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.002126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.481 [2024-10-08 09:23:01.002144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.002151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.481 [2024-10-08 09:23:01.053258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.481 [2024-10-08 09:23:01.053379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.481 [2024-10-08 09:23:01.053475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.481 [2024-10-08 09:23:01.053601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.481 [2024-10-08 09:23:01.053669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.481 [2024-10-08 09:23:01.053741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.481 [2024-10-08 09:23:01.053803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.481 [2024-10-08 09:23:01.053811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.481 [2024-10-08 09:23:01.053821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.481 [2024-10-08 09:23:01.053827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:09.481 [2024-10-08 09:23:01.054012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.974 ms, result 0 00:17:09.481 true 00:17:09.481 09:23:01 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73912 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73912 ']' 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73912 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73912 00:17:09.481 killing process with pid 73912 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73912' 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73912 00:17:09.481 09:23:01 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73912 00:17:16.042 09:23:06 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:16.042 65536+0 records in 00:17:16.042 65536+0 records out 00:17:16.042 268435456 bytes (268 MB, 256 MiB) copied, 1.08215 s, 248 MB/s 00:17:16.042 09:23:07 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.042 [2024-10-08 09:23:07.604686] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
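The `killprocess 73912` xtrace above follows the usual autotest teardown pattern: check that a pid was passed and is still alive, resolve its command name on Linux, refuse to kill a `sudo` wrapper directly, then signal and reap it. A reconstructed sketch of that helper is below; the real implementation lives in test/common/autotest_common.sh (the `@950`..`@974` line references in the trace), and its details may differ:

```bash
# Reconstructed sketch of the killprocess helper exercised in the trace above;
# the actual function is in test/common/autotest_common.sh and may differ.
killprocess() {
	local pid=$1
	[[ -n $pid ]] || return 1           # the "'[' -z 73912 ']'" guard
	kill -0 "$pid" || return 1          # signal 0: is the process still alive?
	if [[ $(uname) == Linux ]]; then
		# resolve the command name; never kill a sudo wrapper directly
		local process_name
		process_name=$(ps --no-headers -o comm= "$pid")
		[[ $process_name != sudo ]] || return 1
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                         # reap it so callers see the exit code
}
```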
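The `dd` figures in the run above are internally consistent: 65536 blocks of 4 KiB make 268435456 bytes (256 MiB), and 268435456 B / 1.08215 s is about 248 MB/s in the decimal megabytes dd reports. A quick shell check, purely illustrative:

```bash
# Verify the dd arithmetic from the log (illustrative only)
echo $(( 65536 * 4096 ))                    # 268435456 bytes written
echo $(( 65536 * 4096 / 1024 / 1024 ))      # 256 (MiB)
awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.08215 / 1e6 }'   # ~248 MB/s
```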
00:17:16.042 [2024-10-08 09:23:07.604809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74094 ] 00:17:16.301 [2024-10-08 09:23:07.750551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.301 [2024-10-08 09:23:07.933180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.559 [2024-10-08 09:23:08.161749] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:16.559 [2024-10-08 09:23:08.161815] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:16.820 [2024-10-08 09:23:08.315492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.315550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:16.820 [2024-10-08 09:23:08.315565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:16.820 [2024-10-08 09:23:08.315572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.317791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.317819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.820 [2024-10-08 09:23:08.317827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:17:16.820 [2024-10-08 09:23:08.317834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.317900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:16.820 [2024-10-08 09:23:08.318424] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:16.820 [2024-10-08 09:23:08.318441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.318448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.820 [2024-10-08 09:23:08.318459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:16.820 [2024-10-08 09:23:08.318466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.319812] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:16.820 [2024-10-08 09:23:08.330059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.330088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:16.820 [2024-10-08 09:23:08.330098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.249 ms 00:17:16.820 [2024-10-08 09:23:08.330105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.330187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.330197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:16.820 [2024-10-08 09:23:08.330206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:16.820 [2024-10-08 09:23:08.330212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.336552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:16.820 [2024-10-08 09:23:08.336580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.820 [2024-10-08 09:23:08.336588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:17:16.820 [2024-10-08 09:23:08.336594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.336678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.336689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.820 [2024-10-08 09:23:08.336696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:16.820 [2024-10-08 09:23:08.336703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.336726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.336732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:16.820 [2024-10-08 09:23:08.336739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:16.820 [2024-10-08 09:23:08.336745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.336763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:16.820 [2024-10-08 09:23:08.339696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.339718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.820 [2024-10-08 09:23:08.339727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:17:16.820 [2024-10-08 09:23:08.339733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.339764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.339774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:16.820 [2024-10-08 09:23:08.339781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.820 [2024-10-08 09:23:08.339787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.339803] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:16.820 [2024-10-08 09:23:08.339819] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:16.820 [2024-10-08 09:23:08.339850] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:16.820 [2024-10-08 09:23:08.339863] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:16.820 [2024-10-08 09:23:08.339951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:16.820 [2024-10-08 09:23:08.339960] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:16.820 [2024-10-08 09:23:08.339969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:16.820 [2024-10-08 09:23:08.339977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:16.820 [2024-10-08 09:23:08.339984] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:16.820 [2024-10-08 09:23:08.339990] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:16.820 [2024-10-08 09:23:08.339998] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:16.820 [2024-10-08 09:23:08.340004] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:16.820 [2024-10-08 09:23:08.340009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:16.820 [2024-10-08 09:23:08.340016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.340022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:16.820 [2024-10-08 09:23:08.340031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:16.820 [2024-10-08 09:23:08.340037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.340106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.820 [2024-10-08 09:23:08.340113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:16.820 [2024-10-08 09:23:08.340120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:16.820 [2024-10-08 09:23:08.340126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.820 [2024-10-08 09:23:08.340218] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:16.820 [2024-10-08 09:23:08.340227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:16.820 [2024-10-08 09:23:08.340234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.820 [2024-10-08 09:23:08.340243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:16.820 [2024-10-08 09:23:08.340254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:16.820 [2024-10-08 09:23:08.340267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:16.820 [2024-10-08 09:23:08.340274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.820 [2024-10-08 09:23:08.340285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:16.820 [2024-10-08 09:23:08.340297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:16.820 [2024-10-08 09:23:08.340302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.820 [2024-10-08 09:23:08.340307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:16.820 [2024-10-08 09:23:08.340312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:16.820 [2024-10-08 09:23:08.340318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:16.820 [2024-10-08 09:23:08.340332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:16.820 [2024-10-08 09:23:08.340337] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:16.820 [2024-10-08 09:23:08.340348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.820 [2024-10-08 09:23:08.340358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:16.820 [2024-10-08 09:23:08.340363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:16.820 [2024-10-08 09:23:08.340369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.821 [2024-10-08 09:23:08.340374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:16.821 [2024-10-08 09:23:08.340379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.821 [2024-10-08 09:23:08.340405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:16.821 [2024-10-08 09:23:08.340411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.821 [2024-10-08 09:23:08.340421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:16.821 [2024-10-08 09:23:08.340427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.821 [2024-10-08 09:23:08.340438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:16.821 [2024-10-08 09:23:08.340443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:16.821 [2024-10-08 09:23:08.340449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.821 [2024-10-08 09:23:08.340454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:16.821 [2024-10-08 09:23:08.340461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:16.821 [2024-10-08 09:23:08.340466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:16.821 [2024-10-08 09:23:08.340476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:16.821 [2024-10-08 09:23:08.340483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:16.821 [2024-10-08 09:23:08.340497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:16.821 [2024-10-08 09:23:08.340512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.821 [2024-10-08 09:23:08.340518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.821 [2024-10-08 09:23:08.340524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:16.821 [2024-10-08 09:23:08.340529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:16.821 [2024-10-08 09:23:08.340535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:16.821 
[2024-10-08 09:23:08.340541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:16.821 [2024-10-08 09:23:08.340546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:16.821 [2024-10-08 09:23:08.340552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:16.821 [2024-10-08 09:23:08.340558] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:16.821 [2024-10-08 09:23:08.340565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:16.821 [2024-10-08 09:23:08.340582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:16.821 [2024-10-08 09:23:08.340588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:16.821 [2024-10-08 09:23:08.340594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:16.821 [2024-10-08 09:23:08.340599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:16.821 [2024-10-08 09:23:08.340605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:16.821 [2024-10-08 09:23:08.340610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:16.821 [2024-10-08 09:23:08.340616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:16.821 [2024-10-08 09:23:08.340626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:16.821 [2024-10-08 09:23:08.340633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:16.821 [2024-10-08 09:23:08.340661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:16.821 [2024-10-08 09:23:08.340668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:16.821 [2024-10-08 09:23:08.340681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:16.821 [2024-10-08 09:23:08.340687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:16.821 [2024-10-08 09:23:08.340692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:16.821 [2024-10-08 09:23:08.340698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.340706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:16.821 [2024-10-08 09:23:08.340711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:17:16.821 [2024-10-08 09:23:08.340717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.376966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.377030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.821 [2024-10-08 09:23:08.377049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.193 ms 00:17:16.821 [2024-10-08 09:23:08.377060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.377265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.377283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:16.821 [2024-10-08 09:23:08.377296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:16.821 [2024-10-08 09:23:08.377307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.403875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.403913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.821 [2024-10-08 09:23:08.403923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.538 ms 00:17:16.821 [2024-10-08 09:23:08.403930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.404001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.404009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.821 [2024-10-08 09:23:08.404016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:16.821 [2024-10-08 09:23:08.404023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.404428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.404443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.821 [2024-10-08 09:23:08.404451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:17:16.821 [2024-10-08 09:23:08.404457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.404577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.404585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.821 [2024-10-08 09:23:08.404592] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:16.821 [2024-10-08 09:23:08.404599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.416011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.416040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.821 [2024-10-08 09:23:08.416049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.394 ms 00:17:16.821 [2024-10-08 09:23:08.416055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.426281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:16.821 [2024-10-08 09:23:08.426501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:16.821 [2024-10-08 09:23:08.426519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.426526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:16.821 [2024-10-08 09:23:08.426535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.350 ms 00:17:16.821 [2024-10-08 09:23:08.426540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.445875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.446037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:16.821 [2024-10-08 09:23:08.446056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.886 ms 00:17:16.821 [2024-10-08 09:23:08.446070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.455436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.455470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:16.821 [2024-10-08 09:23:08.455479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.292 ms 00:17:16.821 [2024-10-08 09:23:08.455485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.464333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.464361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:16.821 [2024-10-08 09:23:08.464370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.798 ms 00:17:16.821 [2024-10-08 09:23:08.464376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.821 [2024-10-08 09:23:08.464897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.821 [2024-10-08 09:23:08.465017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:16.821 [2024-10-08 09:23:08.465029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:17:16.821 [2024-10-08 09:23:08.465036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.513542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.513599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:17.079 [2024-10-08 09:23:08.513611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.480 ms 00:17:17.079 [2024-10-08 09:23:08.513619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.522490] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:17.079 [2024-10-08 09:23:08.537962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.538014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.079 [2024-10-08 09:23:08.538027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.236 ms 00:17:17.079 [2024-10-08 09:23:08.538033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.538156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.538165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:17.079 [2024-10-08 09:23:08.538173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:17.079 [2024-10-08 09:23:08.538179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.538229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.538242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:17.079 [2024-10-08 09:23:08.538251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:17.079 [2024-10-08 09:23:08.538257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.538275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.538282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:17.079 [2024-10-08 09:23:08.538289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.079 [2024-10-08 09:23:08.538295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.538325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:17.079 [2024-10-08 09:23:08.538333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.079 [2024-10-08 09:23:08.538339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:17.079 [2024-10-08 09:23:08.538347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:17.079 [2024-10-08 09:23:08.538355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.079 [2024-10-08 09:23:08.556790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.080 [2024-10-08 09:23:08.556825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:17.080 [2024-10-08 09:23:08.556836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 00:17:17.080 [2024-10-08 09:23:08.556843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.080 [2024-10-08 09:23:08.556931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.080 [2024-10-08 09:23:08.556942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:17.080 [2024-10-08 09:23:08.556951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:17.080 [2024-10-08 09:23:08.556958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
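The layout dump earlier in this startup (`ftl_layout.c` / `ftl_sb_v5.c` entries) can be cross-checked against the superblock region table: sizes are given in FTL blocks of 4 KiB, so the hex `blk_sz` values line up with the MiB figures, and the 90 MiB l2p region is exactly the 23592960 L2P entries at the reported 4-byte address size. A small illustrative check:

```bash
# Cross-check the layout dump above (illustrative; assumes 4 KiB FTL blocks)
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))      # 90     -> l2p region, 90.00 MiB
echo $(( 23592960 * 4 / 1024 / 1024 ))       # 90     -> L2P entries * 4 B addr size
echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # 102400 -> data_btm, 102400.00 MiB
```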
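Each management step above is logged by `trace_step` as an Action quad: `name`, `duration`, `status` (0 on success). When hunting for slow startup or shutdown steps it can help to fold those quads into a ranked table. A hypothetical one-liner (not part of the test suite), assuming the console log was saved as ftl.log with one entry per line as Jenkins originally emits it:

```bash
# Pair each trace_step "name:" with its "duration:" and rank the slowest steps.
# Hypothetical post-processing; ftl.log is an assumed capture of this console.
paste -d'|' \
	<(sed -n 's/.*trace_step.*name: //p' ftl.log) \
	<(sed -n 's/.*trace_step.*duration: \([0-9.]*\) ms.*/\1/p' ftl.log) |
	sort -t'|' -k2 -rn | head
```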
00:17:17.080 [2024-10-08 09:23:08.557753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:17.080 [2024-10-08 09:23:08.560130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.983 ms, result 0 00:17:17.080 [2024-10-08 09:23:08.561204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.080 [2024-10-08 09:23:08.572022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.040  [2024-10-08T09:23:10.655Z] Copying: 39/256 [MB] (39 MBps) [2024-10-08T09:23:11.589Z] Copying: 83/256 [MB] (43 MBps) [2024-10-08T09:23:12.963Z] Copying: 122/256 [MB] (39 MBps) [2024-10-08T09:23:13.898Z] Copying: 164/256 [MB] (41 MBps) [2024-10-08T09:23:14.833Z] Copying: 205/256 [MB] (41 MBps) [2024-10-08T09:23:14.833Z] Copying: 248/256 [MB] (42 MBps) [2024-10-08T09:23:14.833Z] Copying: 256/256 [MB] (average 41 MBps)[2024-10-08 09:23:14.755344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.150 [2024-10-08 09:23:14.765200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.765246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.150 [2024-10-08 09:23:14.765261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.150 [2024-10-08 09:23:14.765269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.765292] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.150 [2024-10-08 09:23:14.768191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.768221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.150 [2024-10-08 09:23:14.768232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:17:23.150 [2024-10-08 09:23:14.768240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.770066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.770217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.150 [2024-10-08 09:23:14.770234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:17:23.150 [2024-10-08 09:23:14.770248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.777304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.777402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.150 [2024-10-08 09:23:14.777463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.037 ms 00:17:23.150 [2024-10-08 09:23:14.777486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.784467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.784568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.150 [2024-10-08 09:23:14.784626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.927 ms 00:17:23.150 [2024-10-08 09:23:14.784655] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.807676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.807799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.150 [2024-10-08 09:23:14.807858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.964 ms 00:17:23.150 [2024-10-08 09:23:14.807880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.822414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.822535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.150 [2024-10-08 09:23:14.822617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.490 ms 00:17:23.150 [2024-10-08 09:23:14.822639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.150 [2024-10-08 09:23:14.822784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.150 [2024-10-08 09:23:14.822810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.150 [2024-10-08 09:23:14.822831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:23.150 [2024-10-08 09:23:14.822876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-10-08 09:23:14.845899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-10-08 09:23:14.846028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.410 [2024-10-08 09:23:14.846078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.991 ms 00:17:23.410 [2024-10-08 09:23:14.846100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-10-08 09:23:14.868747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-10-08 09:23:14.868850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.410 [2024-10-08 09:23:14.868899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.583 ms 00:17:23.410 [2024-10-08 09:23:14.868922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-10-08 09:23:14.891048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-10-08 09:23:14.891149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.410 [2024-10-08 09:23:14.891196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.085 ms 00:17:23.410 [2024-10-08 09:23:14.891218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-10-08 09:23:14.913589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.410 [2024-10-08 09:23:14.913694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.410 [2024-10-08 09:23:14.913742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.301 ms 00:17:23.410 [2024-10-08 09:23:14.913764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.410 [2024-10-08 09:23:14.913804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.410 [2024-10-08 09:23:14.913833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.913866] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.913895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.913924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.913988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.914857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915220] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.915983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 
[2024-10-08 09:23:14.916335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.410 [2024-10-08 09:23:14.916428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:17:23.411 [2024-10-08 09:23:14.916543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.411 [2024-10-08 09:23:14.916744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:17:23.411 [2024-10-08 09:23:14.916753] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:23.411 [2024-10-08 09:23:14.916762] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.411 [2024-10-08 09:23:14.916770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.411 [2024-10-08 09:23:14.916777] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.411 [2024-10-08 09:23:14.916785] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.411 [2024-10-08 09:23:14.916795] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.411 [2024-10-08 09:23:14.916802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.411 [2024-10-08 09:23:14.916810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.411 [2024-10-08 09:23:14.916817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.411 [2024-10-08 09:23:14.916824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.411 [2024-10-08 09:23:14.916831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.411 [2024-10-08 09:23:14.916840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.411 [2024-10-08 09:23:14.916849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:17:23.411 [2024-10-08 09:23:14.916857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.930361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.411 [2024-10-08 09:23:14.930472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.411 [2024-10-08 09:23:14.930525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.468 ms 00:17:23.411 [2024-10-08 09:23:14.930567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.930956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.411 [2024-10-08 09:23:14.931022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.411 [2024-10-08 09:23:14.931106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:17:23.411 [2024-10-08 09:23:14.931130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.963298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-10-08 09:23:14.963448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.411 [2024-10-08 09:23:14.963501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-10-08 09:23:14.963523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.963640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-10-08 09:23:14.963665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.411 [2024-10-08 09:23:14.963684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-10-08 09:23:14.963703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.963755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-10-08 
09:23:14.963892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.411 [2024-10-08 09:23:14.963916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-10-08 09:23:14.963935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:14.963964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-10-08 09:23:14.963985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.411 [2024-10-08 09:23:14.964004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-10-08 09:23:14.964071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.411 [2024-10-08 09:23:15.046310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.411 [2024-10-08 09:23:15.046533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.411 [2024-10-08 09:23:15.046591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.411 [2024-10-08 09:23:15.046613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.670 [2024-10-08 09:23:15.113521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.670 [2024-10-08 09:23:15.113638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.670 [2024-10-08 09:23:15.113694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.670 [2024-10-08 09:23:15.113820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.670 [2024-10-08 09:23:15.113879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.113931] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.113939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.670 [2024-10-08 09:23:15.113948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.113956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.114008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.670 [2024-10-08 09:23:15.114018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.670 [2024-10-08 09:23:15.114027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.670 [2024-10-08 09:23:15.114034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.670 [2024-10-08 09:23:15.114184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.969 ms, result 0 00:17:24.605 00:17:24.605 00:17:24.605 09:23:16 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74191 00:17:24.605 09:23:16 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74191 00:17:24.605 09:23:16 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74191 ']' 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.605 09:23:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:24.864 [2024-10-08 09:23:16.306738] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:17:24.864 [2024-10-08 09:23:16.307241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:17:24.864 [2024-10-08 09:23:16.457182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.122 [2024-10-08 09:23:16.664468] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.689 09:23:17 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.689 09:23:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:25.689 09:23:17 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:25.947 [2024-10-08 09:23:17.526084] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.947 [2024-10-08 09:23:17.526167] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.206 [2024-10-08 09:23:17.697875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.697931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.206 [2024-10-08 09:23:17.697947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.206 [2024-10-08 09:23:17.697955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.700973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.701179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.206 [2024-10-08 09:23:17.701201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:17:26.206 [2024-10-08 09:23:17.701211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.701439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.206 [2024-10-08 09:23:17.702168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.206 [2024-10-08 09:23:17.702193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.702202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.206 [2024-10-08 09:23:17.702213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:17:26.206 [2024-10-08 09:23:17.702220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.703628] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.206 [2024-10-08 09:23:17.716646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.716682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.206 [2024-10-08 09:23:17.716694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.024 ms 00:17:26.206 [2024-10-08 09:23:17.716704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.716788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.716803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.206 [2024-10-08 09:23:17.716812] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:26.206 [2024-10-08 09:23:17.716821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.723476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.723651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.206 [2024-10-08 09:23:17.723667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:17:26.206 [2024-10-08 09:23:17.723677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.723788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.723801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.206 [2024-10-08 09:23:17.723811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.206 [2024-10-08 09:23:17.723819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.723845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.723857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.206 [2024-10-08 09:23:17.723866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:26.206 [2024-10-08 09:23:17.723875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.723899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.206 [2024-10-08 09:23:17.727400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.727426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.206 [2024-10-08 09:23:17.727437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:17:26.206 [2024-10-08 09:23:17.727447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.727494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.206 [2024-10-08 09:23:17.727504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.206 [2024-10-08 09:23:17.727514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:26.206 [2024-10-08 09:23:17.727522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.206 [2024-10-08 09:23:17.727545] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.206 [2024-10-08 09:23:17.727563] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.206 [2024-10-08 09:23:17.727606] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.206 [2024-10-08 09:23:17.727628] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.206 [2024-10-08 09:23:17.727738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.206 [2024-10-08 09:23:17.727750] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.206 [2024-10-08 09:23:17.727763] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.206 [2024-10-08 09:23:17.727774] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.206 [2024-10-08 09:23:17.727785] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.206 [2024-10-08 09:23:17.727793] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.206 [2024-10-08 09:23:17.727802] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.206 [2024-10-08 09:23:17.727809] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.207 [2024-10-08 09:23:17.727822] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.207 [2024-10-08 09:23:17.727832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.727841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.207 [2024-10-08 09:23:17.727848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:26.207 [2024-10-08 09:23:17.727856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.727957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.727968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.207 [2024-10-08 09:23:17.727976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:26.207 [2024-10-08 09:23:17.727985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.728089] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.207 [2024-10-08 09:23:17.728103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.207 [2024-10-08 09:23:17.728111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.207 [2024-10-08 09:23:17.728149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.207 [2024-10-08 09:23:17.728177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.207 [2024-10-08 09:23:17.728191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.207 [2024-10-08 09:23:17.728200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.207 [2024-10-08 09:23:17.728206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.207 [2024-10-08 09:23:17.728215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.207 [2024-10-08 09:23:17.728221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.207 [2024-10-08 09:23:17.728230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 
[2024-10-08 09:23:17.728237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.207 [2024-10-08 09:23:17.728246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.207 [2024-10-08 09:23:17.728274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.207 [2024-10-08 09:23:17.728297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.207 [2024-10-08 09:23:17.728320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.207 [2024-10-08 09:23:17.728344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.207 [2024-10-08 09:23:17.728367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.207 [2024-10-08 09:23:17.728381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.207 [2024-10-08 09:23:17.728409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.207 [2024-10-08 09:23:17.728416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.207 [2024-10-08 09:23:17.728425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.207 [2024-10-08 09:23:17.728432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.207 [2024-10-08 09:23:17.728442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.207 [2024-10-08 09:23:17.728457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.207 [2024-10-08 09:23:17.728465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.207 [2024-10-08 09:23:17.728492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.207 [2024-10-08 09:23:17.728501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.207 [2024-10-08 09:23:17.728517] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:26.207 [2024-10-08 09:23:17.728524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.207 [2024-10-08 09:23:17.728532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.207 [2024-10-08 09:23:17.728538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.207 [2024-10-08 09:23:17.728546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.207 [2024-10-08 09:23:17.728553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.207 [2024-10-08 09:23:17.728563] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.207 [2024-10-08 09:23:17.728572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.207 [2024-10-08 09:23:17.728591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.207 [2024-10-08 09:23:17.728602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.207 [2024-10-08 09:23:17.728609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.207 [2024-10-08 09:23:17.728618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.207 [2024-10-08 09:23:17.728626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.207 [2024-10-08 09:23:17.728635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.207 [2024-10-08 09:23:17.728642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.207 [2024-10-08 09:23:17.728651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.207 [2024-10-08 09:23:17.728658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.207 [2024-10-08 09:23:17.728698] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.207 [2024-10-08 
09:23:17.728706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.207 [2024-10-08 09:23:17.728727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.207 [2024-10-08 09:23:17.728736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.207 [2024-10-08 09:23:17.728744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.207 [2024-10-08 09:23:17.728752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.728761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.207 [2024-10-08 09:23:17.728771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:17:26.207 [2024-10-08 09:23:17.728777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.757758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.757794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.207 [2024-10-08 09:23:17.757807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.920 ms 00:17:26.207 [2024-10-08 09:23:17.757815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.757937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.757947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.207 [2024-10-08 09:23:17.757957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:26.207 [2024-10-08 09:23:17.757965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.798033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.798073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.207 [2024-10-08 09:23:17.798089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.042 ms 00:17:26.207 [2024-10-08 09:23:17.798097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.798193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.798205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.207 [2024-10-08 09:23:17.798216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.207 [2024-10-08 09:23:17.798225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.207 [2024-10-08 09:23:17.798675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.207 [2024-10-08 09:23:17.798691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.207 [2024-10-08 09:23:17.798702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:26.208 [2024-10-08 09:23:17.798710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.798846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.798856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.208 [2024-10-08 09:23:17.798866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:26.208 [2024-10-08 09:23:17.798874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.814833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.814866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.208 [2024-10-08 09:23:17.814881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.932 ms 00:17:26.208 [2024-10-08 09:23:17.814893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.827801] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:26.208 [2024-10-08 09:23:17.827835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.208 [2024-10-08 09:23:17.827848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.827857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.208 [2024-10-08 09:23:17.827868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.840 ms 00:17:26.208 [2024-10-08 09:23:17.827876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.852889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.852933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.208 [2024-10-08 09:23:17.852948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.935 ms 00:17:26.208 [2024-10-08 09:23:17.852962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.864727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.864760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.208 [2024-10-08 09:23:17.864775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.681 ms 00:17:26.208 [2024-10-08 09:23:17.864782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.876132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.876164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.208 [2024-10-08 09:23:17.876176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.282 ms 00:17:26.208 [2024-10-08 09:23:17.876183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.208 [2024-10-08 09:23:17.876820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.208 [2024-10-08 09:23:17.876845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.208 [2024-10-08 09:23:17.876856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:26.208 [2024-10-08 09:23:17.876864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.466 [2024-10-08 
09:23:17.936636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.466 [2024-10-08 09:23:17.936689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.466 [2024-10-08 09:23:17.936705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.744 ms 00:17:26.466 [2024-10-08 09:23:17.936716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.466 [2024-10-08 09:23:17.947562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.466 [2024-10-08 09:23:17.964484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.466 [2024-10-08 09:23:17.964536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.466 [2024-10-08 09:23:17.964548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.660 ms 00:17:26.467 [2024-10-08 09:23:17.964558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.964661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.964675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.467 [2024-10-08 09:23:17.964684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:26.467 [2024-10-08 09:23:17.964694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.964751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.964762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.467 [2024-10-08 09:23:17.964770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.467 [2024-10-08 09:23:17.964780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.964807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.964818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.467 [2024-10-08 09:23:17.964826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.467 [2024-10-08 09:23:17.964842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.964877] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.467 [2024-10-08 09:23:17.964892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.964900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.467 [2024-10-08 09:23:17.964910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:26.467 [2024-10-08 09:23:17.964917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.989028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.989187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.467 [2024-10-08 09:23:17.989211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.085 ms 00:17:26.467 [2024-10-08 09:23:17.989220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.989314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.467 [2024-10-08 09:23:17.989326] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.467 [2024-10-08 09:23:17.989337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:26.467 [2024-10-08 09:23:17.989345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.467 [2024-10-08 09:23:17.990286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.467 [2024-10-08 09:23:17.993311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.085 ms, result 0 00:17:26.467 [2024-10-08 09:23:17.994362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.467 Some configs were skipped because the RPC state that can call them passed over. 00:17:26.467 09:23:18 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:26.725 [2024-10-08 09:23:18.225254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.725 [2024-10-08 09:23:18.225458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:26.725 [2024-10-08 09:23:18.225521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:17:26.725 [2024-10-08 09:23:18.225548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.725 [2024-10-08 09:23:18.225622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.272 ms, result 0 00:17:26.725 true 00:17:26.725 09:23:18 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:26.983 [2024-10-08 09:23:18.434040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.983 [2024-10-08 09:23:18.434252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:26.983 [2024-10-08 09:23:18.434316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:17:26.983 [2024-10-08 09:23:18.434340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.983 [2024-10-08 09:23:18.434410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.550 ms, result 0 00:17:26.983 true 00:17:26.983 09:23:18 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74191 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74191 ']' 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74191 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74191 00:17:26.983 killing process with pid 74191 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74191' 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74191 00:17:26.983 09:23:18 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74191 00:17:27.575 [2024-10-08 09:23:19.129736] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.129800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.575 [2024-10-08 09:23:19.129812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.575 [2024-10-08 09:23:19.129821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.129840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.575 [2024-10-08 09:23:19.132059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.132088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.575 [2024-10-08 09:23:19.132100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:17:27.575 [2024-10-08 09:23:19.132106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.132337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.132345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.575 [2024-10-08 09:23:19.132354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:27.575 [2024-10-08 09:23:19.132361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.135741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.135768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.575 [2024-10-08 09:23:19.135777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:17:27.575 [2024-10-08 09:23:19.135784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.141044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.141070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:27.575 [2024-10-08 09:23:19.141082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.228 ms 00:17:27.575 [2024-10-08 09:23:19.141091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.148877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.148902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.575 [2024-10-08 09:23:19.148913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.742 ms 00:17:27.575 [2024-10-08 09:23:19.148920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.155819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.155846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.575 [2024-10-08 09:23:19.155856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.865 ms 00:17:27.575 [2024-10-08 09:23:19.155869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.155979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.155988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.575 [2024-10-08 09:23:19.155996] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:27.575 [2024-10-08 09:23:19.156004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.163869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.163893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:27.575 [2024-10-08 09:23:19.163902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.847 ms 00:17:27.575 [2024-10-08 09:23:19.163909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.171606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.171631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:27.575 [2024-10-08 09:23:19.171644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.665 ms 00:17:27.575 [2024-10-08 09:23:19.171650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.178759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.178929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.575 [2024-10-08 09:23:19.178946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.078 ms 00:17:27.575 [2024-10-08 09:23:19.178952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.186103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.575 [2024-10-08 09:23:19.186207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.575 [2024-10-08 09:23:19.186222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.095 ms 00:17:27.575 [2024-10-08 09:23:19.186227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.575 [2024-10-08 09:23:19.186263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.575 [2024-10-08 09:23:19.186278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186351] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 
[2024-10-08 09:23:19.186533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:27.575 [2024-10-08 09:23:19.186620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:27.576 [2024-10-08 09:23:19.186700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:27.576 [2024-10-08 09:23:19.186978] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.576 [2024-10-08 09:23:19.186987] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:27.576 [2024-10-08 09:23:19.186993] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.576 [2024-10-08 09:23:19.187002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.576 [2024-10-08 09:23:19.187008] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.576 [2024-10-08 09:23:19.187015] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.576 [2024-10-08 09:23:19.187026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.576 [2024-10-08 09:23:19.187034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.576 [2024-10-08 09:23:19.187041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.576 [2024-10-08 09:23:19.187048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.576 [2024-10-08 09:23:19.187053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.576 [2024-10-08 09:23:19.187060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
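The shutdown dump above walks all 100 bands, each still free with wr_cnt 0 after this trim-only run, before printing the device totals. For longer runs it is easier to tally the band states than to read the dump line by line; a small sketch over a captured log, assuming one "Band N: ... state: <state>" entry per line as the target emits it (the log file name is illustrative):

    # Count FTL bands per state from a captured shutdown log.
    awk '/ftl_dev_dump_bands.*Band [0-9]+:/ {
        counts[$NF]++            # last field is the state, e.g. "free"
    }
    END { for (s in counts) print s, counts[s] }' ftl_shutdown.log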
00:17:27.576 [2024-10-08 09:23:19.187067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.576 [2024-10-08 09:23:19.187075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:17:27.576 [2024-10-08 09:23:19.187080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.197784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.576 [2024-10-08 09:23:19.197904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:27.576 [2024-10-08 09:23:19.197962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.682 ms 00:17:27.576 [2024-10-08 09:23:19.197981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.198319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.576 [2024-10-08 09:23:19.198399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:27.576 [2024-10-08 09:23:19.198445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:27.576 [2024-10-08 09:23:19.198464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.230929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.576 [2024-10-08 09:23:19.231040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.576 [2024-10-08 09:23:19.231085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.576 [2024-10-08 09:23:19.231105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.231216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.576 [2024-10-08 09:23:19.231488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.576 [2024-10-08 09:23:19.231532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.576 [2024-10-08 09:23:19.231551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.231728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.576 [2024-10-08 09:23:19.231752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.576 [2024-10-08 09:23:19.231772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.576 [2024-10-08 09:23:19.231787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.576 [2024-10-08 09:23:19.231855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.576 [2024-10-08 09:23:19.231874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.576 [2024-10-08 09:23:19.231891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.576 [2024-10-08 09:23:19.231906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.294306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.294472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.835 [2024-10-08 09:23:19.294520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.294540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 
09:23:19.346145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.346318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.835 [2024-10-08 09:23:19.346363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.346381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.346514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.346575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.835 [2024-10-08 09:23:19.346598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.346613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.346672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.346694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.835 [2024-10-08 09:23:19.346740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.346757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.346853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.347032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.835 [2024-10-08 09:23:19.347085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.347103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.347147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.347198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:27.835 [2024-10-08 09:23:19.347222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.347237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.347287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.347482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.835 [2024-10-08 09:23:19.347516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.347532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.347593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.835 [2024-10-08 09:23:19.347665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.835 [2024-10-08 09:23:19.347687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.835 [2024-10-08 09:23:19.347702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.835 [2024-10-08 09:23:19.347840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.082 ms, result 0 00:17:28.402 09:23:19 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:28.402 09:23:19 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.402 [2024-10-08 09:23:20.059543] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:28.402 [2024-10-08 09:23:20.059671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74244 ] 00:17:28.659 [2024-10-08 09:23:20.206456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.926 [2024-10-08 09:23:20.381332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.201 [2024-10-08 09:23:20.610153] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.201 [2024-10-08 09:23:20.610214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.201 [2024-10-08 09:23:20.764207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.764435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.201 [2024-10-08 09:23:20.764458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.201 [2024-10-08 09:23:20.764465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.766660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.766690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.201 [2024-10-08 09:23:20.766698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:17:29.201 [2024-10-08 09:23:20.766704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.766770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.201 [2024-10-08 09:23:20.767291] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.201 [2024-10-08 09:23:20.767307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.767314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.201 [2024-10-08 09:23:20.767323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:29.201 [2024-10-08 09:23:20.767329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.768667] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.201 [2024-10-08 09:23:20.778721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.778752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.201 [2024-10-08 09:23:20.778762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.056 ms 00:17:29.201 [2024-10-08 09:23:20.778769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.778843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.778853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.201 [2024-10-08 09:23:20.778863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.019 ms 00:17:29.201 [2024-10-08 09:23:20.778869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.785069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.785096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.201 [2024-10-08 09:23:20.785103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:17:29.201 [2024-10-08 09:23:20.785110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.785191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.785202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.201 [2024-10-08 09:23:20.785208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:29.201 [2024-10-08 09:23:20.785214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.785233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.785240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.201 [2024-10-08 09:23:20.785247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.201 [2024-10-08 09:23:20.785252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.785269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.201 [2024-10-08 09:23:20.788418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.788442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.201 [2024-10-08 09:23:20.788450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:17:29.201 [2024-10-08 09:23:20.788456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.788486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.788497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.201 [2024-10-08 09:23:20.788503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.201 [2024-10-08 09:23:20.788510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.788524] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:29.201 [2024-10-08 09:23:20.788540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:29.201 [2024-10-08 09:23:20.788570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.201 [2024-10-08 09:23:20.788582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:29.201 [2024-10-08 09:23:20.788668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.201 [2024-10-08 09:23:20.788677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.201 [2024-10-08 09:23:20.788687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.201 [2024-10-08 09:23:20.788695] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.201 [2024-10-08 09:23:20.788702] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.201 [2024-10-08 09:23:20.788709] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.201 [2024-10-08 09:23:20.788716] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.201 [2024-10-08 09:23:20.788722] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.201 [2024-10-08 09:23:20.788727] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.201 [2024-10-08 09:23:20.788733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.788739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.201 [2024-10-08 09:23:20.788749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:17:29.201 [2024-10-08 09:23:20.788754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.788823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.201 [2024-10-08 09:23:20.788830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.201 [2024-10-08 09:23:20.788836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:29.201 [2024-10-08 09:23:20.788842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.201 [2024-10-08 09:23:20.788930] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.201 [2024-10-08 09:23:20.788938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.201 [2024-10-08 09:23:20.788944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.201 [2024-10-08 09:23:20.788952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.201 [2024-10-08 09:23:20.788959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.201 [2024-10-08 09:23:20.788964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.201 [2024-10-08 09:23:20.788970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.201 [2024-10-08 09:23:20.788976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.201 [2024-10-08 09:23:20.788983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.201 [2024-10-08 09:23:20.788988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.201 [2024-10-08 09:23:20.788994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.201 [2024-10-08 09:23:20.789004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.201 [2024-10-08 09:23:20.789009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.201 [2024-10-08 09:23:20.789016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.201 [2024-10-08 09:23:20.789022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:29.201 [2024-10-08 09:23:20.789030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.201 [2024-10-08 09:23:20.789035] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.201 [2024-10-08 09:23:20.789041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:29.201 [2024-10-08 09:23:20.789046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.201 [2024-10-08 09:23:20.789051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.201 [2024-10-08 09:23:20.789057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.201 [2024-10-08 09:23:20.789062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.201 [2024-10-08 09:23:20.789069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.201 [2024-10-08 09:23:20.789075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.201 [2024-10-08 09:23:20.789080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.201 [2024-10-08 09:23:20.789085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.201 [2024-10-08 09:23:20.789091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.201 [2024-10-08 09:23:20.789096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.201 [2024-10-08 09:23:20.789101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.201 [2024-10-08 09:23:20.789106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:29.202 [2024-10-08 09:23:20.789112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.202 [2024-10-08 09:23:20.789117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.202 [2024-10-08 09:23:20.789122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:29.202 [2024-10-08 09:23:20.789127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.202 [2024-10-08 09:23:20.789133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.202 [2024-10-08 09:23:20.789138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:29.202 [2024-10-08 09:23:20.789143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.202 [2024-10-08 09:23:20.789148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.202 [2024-10-08 09:23:20.789154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:29.202 [2024-10-08 09:23:20.789159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.202 [2024-10-08 09:23:20.789165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.202 [2024-10-08 09:23:20.789170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:29.202 [2024-10-08 09:23:20.789175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.202 [2024-10-08 09:23:20.789180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.202 [2024-10-08 09:23:20.789187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.202 [2024-10-08 09:23:20.789193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.202 [2024-10-08 09:23:20.789199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.202 [2024-10-08 09:23:20.789206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.202 
[2024-10-08 09:23:20.789211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.202 [2024-10-08 09:23:20.789217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.202 [2024-10-08 09:23:20.789222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.202 [2024-10-08 09:23:20.789227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.202 [2024-10-08 09:23:20.789233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.202 [2024-10-08 09:23:20.789241] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.202 [2024-10-08 09:23:20.789248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.202 [2024-10-08 09:23:20.789263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:29.202 [2024-10-08 09:23:20.789268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.202 [2024-10-08 09:23:20.789274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:29.202 [2024-10-08 09:23:20.789279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:29.202 [2024-10-08 09:23:20.789285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:29.202 [2024-10-08 09:23:20.789291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:29.202 [2024-10-08 09:23:20.789296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:29.202 [2024-10-08 09:23:20.789302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:29.202 [2024-10-08 09:23:20.789307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:29.202 [2024-10-08 09:23:20.789334] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.202 [2024-10-08 09:23:20.789341] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.202 [2024-10-08 09:23:20.789354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.202 [2024-10-08 09:23:20.789360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.202 [2024-10-08 09:23:20.789366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.202 [2024-10-08 09:23:20.789371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.789379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.202 [2024-10-08 09:23:20.789384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:17:29.202 [2024-10-08 09:23:20.789400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.829698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.829743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.202 [2024-10-08 09:23:20.829756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.242 ms 00:17:29.202 [2024-10-08 09:23:20.829764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.829907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.829919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.202 [2024-10-08 09:23:20.829929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:29.202 [2024-10-08 09:23:20.829936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.856556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.856731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.202 [2024-10-08 09:23:20.856746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.599 ms 00:17:29.202 [2024-10-08 09:23:20.856753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.856830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.856839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.202 [2024-10-08 09:23:20.856846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.202 [2024-10-08 09:23:20.856852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.857232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.857245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.202 [2024-10-08 09:23:20.857253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:17:29.202 [2024-10-08 09:23:20.857260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 
09:23:20.857373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.857381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.202 [2024-10-08 09:23:20.857406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:29.202 [2024-10-08 09:23:20.857414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.868845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.868870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.202 [2024-10-08 09:23:20.868878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.413 ms 00:17:29.202 [2024-10-08 09:23:20.868885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.202 [2024-10-08 09:23:20.879344] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:29.202 [2024-10-08 09:23:20.879414] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.202 [2024-10-08 09:23:20.879425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.202 [2024-10-08 09:23:20.879431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.202 [2024-10-08 09:23:20.879439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.454 ms 00:17:29.202 [2024-10-08 09:23:20.879445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.898164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.898192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.461 [2024-10-08 09:23:20.898206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.659 ms 00:17:29.461 [2024-10-08 09:23:20.898214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.907055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.907082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.461 [2024-10-08 09:23:20.907090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.783 ms 00:17:29.461 [2024-10-08 09:23:20.907096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.915803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.915828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.461 [2024-10-08 09:23:20.915836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.664 ms 00:17:29.461 [2024-10-08 09:23:20.915842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.916318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.916334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.461 [2024-10-08 09:23:20.916342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:17:29.461 [2024-10-08 09:23:20.916348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.964665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.964702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.461 [2024-10-08 09:23:20.964714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.299 ms 00:17:29.461 [2024-10-08 09:23:20.964721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.972977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.461 [2024-10-08 09:23:20.988150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.988186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.461 [2024-10-08 09:23:20.988198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.333 ms 00:17:29.461 [2024-10-08 09:23:20.988205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.988288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.988298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.461 [2024-10-08 09:23:20.988306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:29.461 [2024-10-08 09:23:20.988312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.988362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.988371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.461 [2024-10-08 09:23:20.988377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:29.461 [2024-10-08 09:23:20.988384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.988429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.988437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.461 [2024-10-08 09:23:20.988444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.461 [2024-10-08 09:23:20.988451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:20.988478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.461 [2024-10-08 09:23:20.988486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:20.988492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.461 [2024-10-08 09:23:20.988500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:29.461 [2024-10-08 09:23:20.988507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:21.007207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:21.007236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.461 [2024-10-08 09:23:21.007245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.681 ms 00:17:29.461 [2024-10-08 09:23:21.007252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:21.007330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.461 [2024-10-08 09:23:21.007341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:29.461 [2024-10-08 09:23:21.007349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:29.461 [2024-10-08 09:23:21.007362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.461 [2024-10-08 09:23:21.008463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.461 [2024-10-08 09:23:21.010814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.970 ms, result 0 00:17:29.461 [2024-10-08 09:23:21.011653] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.461 [2024-10-08 09:23:21.022603] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.395  [2024-10-08T09:23:23.453Z] Copying: 45/256 [MB] (45 MBps) [2024-10-08T09:23:24.386Z] Copying: 87/256 [MB] (41 MBps) [2024-10-08T09:23:25.318Z] Copying: 130/256 [MB] (43 MBps) [2024-10-08T09:23:26.251Z] Copying: 173/256 [MB] (43 MBps) [2024-10-08T09:23:27.185Z] Copying: 217/256 [MB] (43 MBps) [2024-10-08T09:23:27.185Z] Copying: 256/256 [MB] (average 43 MBps)[2024-10-08 09:23:26.927493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.502 [2024-10-08 09:23:26.937454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.937600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.502 [2024-10-08 09:23:26.937621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.502 [2024-10-08 09:23:26.937630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.937655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.502 [2024-10-08 09:23:26.940598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.940627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.502 [2024-10-08 09:23:26.940639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:17:35.502 [2024-10-08 09:23:26.940646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.940911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.940926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.502 [2024-10-08 09:23:26.940938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:35.502 [2024-10-08 09:23:26.940946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.944640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.944659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.502 [2024-10-08 09:23:26.944668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.680 ms 00:17:35.502 [2024-10-08 09:23:26.944676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.952058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.952173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
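Each FTL management step is traced as an Action/name/duration/status quadruple, which makes it straightforward to pull a timing profile for a startup or shutdown sequence out of a captured log. A minimal sketch, again assuming a hypothetical console.log capture in which each record sits on its own line:

    # Pair each "name:" record with the "duration:" record that follows it,
    # then rank the FTL management steps by time spent.
    grep -E -o 'name: .*|duration: [0-9.]+ ms' console.log |
    paste - - | sort -t: -k3 -rn | head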
00:17:35.502 [2024-10-08 09:23:26.952193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.366 ms 00:17:35.502 [2024-10-08 09:23:26.952201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.975156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.975267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.502 [2024-10-08 09:23:26.975327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:17:35.502 [2024-10-08 09:23:26.975349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.990192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.990305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.502 [2024-10-08 09:23:26.990365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.784 ms 00:17:35.502 [2024-10-08 09:23:26.990402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:26.990543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:26.990568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.502 [2024-10-08 09:23:26.990589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:35.502 [2024-10-08 09:23:26.990633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:27.014115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:27.014229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.502 [2024-10-08 09:23:27.014281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.424 ms 00:17:35.502 [2024-10-08 09:23:27.014302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:27.036844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:27.036946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.502 [2024-10-08 09:23:27.036997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.508 ms 00:17:35.502 [2024-10-08 09:23:27.037019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:27.058829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:27.058930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.502 [2024-10-08 09:23:27.058980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.778 ms 00:17:35.502 [2024-10-08 09:23:27.059001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:27.081368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.502 [2024-10-08 09:23:27.081439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.502 [2024-10-08 09:23:27.081465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.309 ms 00:17:35.502 [2024-10-08 09:23:27.081485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.502 [2024-10-08 09:23:27.081517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.502 [2024-10-08 
09:23:27.081545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.081994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 
[2024-10-08 09:23:27.082876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.082992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:17:35.503 [2024-10-08 09:23:27.083922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.083982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.503 [2024-10-08 09:23:27.084728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.504 [2024-10-08 09:23:27.084799] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.504 [2024-10-08 09:23:27.084808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:35.504 [2024-10-08 09:23:27.084816] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.504 [2024-10-08 09:23:27.084823] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.504 [2024-10-08 09:23:27.084830] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.504 [2024-10-08 09:23:27.084840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.504 [2024-10-08 09:23:27.084848] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.504 [2024-10-08 09:23:27.084856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.504 [2024-10-08 09:23:27.084863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.504 [2024-10-08 09:23:27.084869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.504 [2024-10-08 09:23:27.084875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.504 [2024-10-08 09:23:27.084882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.504 [2024-10-08 09:23:27.084890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.504 [2024-10-08 09:23:27.084899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:17:35.504 [2024-10-08 09:23:27.084906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.097745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.504 [2024-10-08 09:23:27.097845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.504 [2024-10-08 09:23:27.097896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.804 ms 00:17:35.504 [2024-10-08 09:23:27.097919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.098319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.504 [2024-10-08 09:23:27.098410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.504 [2024-10-08 09:23:27.098466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:17:35.504 [2024-10-08 09:23:27.098490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.130829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.504 [2024-10-08 09:23:27.130948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.504 [2024-10-08 09:23:27.131013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.504 [2024-10-08 09:23:27.131036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.131119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.504 [2024-10-08 09:23:27.131213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.504 [2024-10-08 09:23:27.131272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.504 [2024-10-08 09:23:27.131294] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.131349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.504 [2024-10-08 09:23:27.131632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.504 [2024-10-08 09:23:27.131680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.504 [2024-10-08 09:23:27.131703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.504 [2024-10-08 09:23:27.131792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.504 [2024-10-08 09:23:27.131818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.504 [2024-10-08 09:23:27.131838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.504 [2024-10-08 09:23:27.131856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.762 [2024-10-08 09:23:27.211571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.762 [2024-10-08 09:23:27.211760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.763 [2024-10-08 09:23:27.211875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.211907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.278234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.278427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.763 [2024-10-08 09:23:27.278489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.278513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.278588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.278611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.763 [2024-10-08 09:23:27.278632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.278686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.278733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.278754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.763 [2024-10-08 09:23:27.278774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.278792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.278905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.279011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.763 [2024-10-08 09:23:27.279031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.279054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.279100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.279122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:35.763 [2024-10-08 09:23:27.279190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:35.763 [2024-10-08 09:23:27.279212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.279269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.279280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.763 [2024-10-08 09:23:27.279288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.279296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.279348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.763 [2024-10-08 09:23:27.279358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.763 [2024-10-08 09:23:27.279374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.763 [2024-10-08 09:23:27.279382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.763 [2024-10-08 09:23:27.279549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.078 ms, result 0 00:17:36.696 00:17:36.696 00:17:36.696 09:23:28 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:36.696 09:23:28 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:36.954 09:23:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.212 [2024-10-08 09:23:28.668087] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
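The cmp/md5sum/spdk_dd sequence above is the heart of the trim check: trim.sh@86 compares the first 4 MiB of the read-back file against /dev/zero (a trimmed range must read back as zeroes), trim.sh@87 fingerprints the file, and trim.sh@90 then rewrites ftl0 with random_pattern through spdk_dd before the device is brought up again. A minimal stand-alone sketch of the verification step, with paths copied from the log and assuming the same checkout layout:

```bash
#!/usr/bin/env bash
# Stand-alone sketch of the trim verification logged above; paths are
# taken from the log and assume the same checkout layout.
DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data

# A trimmed range must read back as all zeroes: compare its first
# 4 MiB (4194304 bytes, as in the cmp invocation above) to /dev/zero.
cmp --bytes=$((4 * 1024 * 1024)) "$DATA" /dev/zero || {
    echo "trimmed range did not read back as zeroes" >&2
    exit 1
}

# Fingerprint the read-back file so a later stage can compare it
# against the pattern that was originally written.
md5sum "$DATA"
```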
00:17:37.212 [2024-10-08 09:23:28.668602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74337 ] 00:17:37.212 [2024-10-08 09:23:28.815288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.470 [2024-10-08 09:23:28.991740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.729 [2024-10-08 09:23:29.221198] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.729 [2024-10-08 09:23:29.221427] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.729 [2024-10-08 09:23:29.375101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.375275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.729 [2024-10-08 09:23:29.375334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.729 [2024-10-08 09:23:29.375355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.377586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.377698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.729 [2024-10-08 09:23:29.377749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:17:37.729 [2024-10-08 09:23:29.377758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.377826] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.729 [2024-10-08 09:23:29.378415] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.729 [2024-10-08 09:23:29.378432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.378439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.729 [2024-10-08 09:23:29.378449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:17:37.729 [2024-10-08 09:23:29.378455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.379764] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:37.729 [2024-10-08 09:23:29.389932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.389962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:37.729 [2024-10-08 09:23:29.389972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.168 ms 00:17:37.729 [2024-10-08 09:23:29.389978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.390061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.390069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:37.729 [2024-10-08 09:23:29.390079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:37.729 [2024-10-08 09:23:29.390085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.396489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:37.729 [2024-10-08 09:23:29.396598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.729 [2024-10-08 09:23:29.396610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.371 ms 00:17:37.729 [2024-10-08 09:23:29.396617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.396696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.396707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.729 [2024-10-08 09:23:29.396714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:37.729 [2024-10-08 09:23:29.396719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.396737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.396743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.729 [2024-10-08 09:23:29.396749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:37.729 [2024-10-08 09:23:29.396755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.396776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.729 [2024-10-08 09:23:29.399844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.399938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.729 [2024-10-08 09:23:29.399950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.073 ms 00:17:37.729 [2024-10-08 09:23:29.399957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.399991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.729 [2024-10-08 09:23:29.400001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.729 [2024-10-08 09:23:29.400008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:37.729 [2024-10-08 09:23:29.400014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.729 [2024-10-08 09:23:29.400038] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:37.730 [2024-10-08 09:23:29.400054] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:37.730 [2024-10-08 09:23:29.400083] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:37.730 [2024-10-08 09:23:29.400095] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:37.730 [2024-10-08 09:23:29.400179] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:37.730 [2024-10-08 09:23:29.400188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.730 [2024-10-08 09:23:29.400196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:37.730 [2024-10-08 09:23:29.400204] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400212] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400218] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.730 [2024-10-08 09:23:29.400224] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.730 [2024-10-08 09:23:29.400230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:37.730 [2024-10-08 09:23:29.400235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:37.730 [2024-10-08 09:23:29.400242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.730 [2024-10-08 09:23:29.400248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.730 [2024-10-08 09:23:29.400256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:17:37.730 [2024-10-08 09:23:29.400262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.730 [2024-10-08 09:23:29.400330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.730 [2024-10-08 09:23:29.400337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.730 [2024-10-08 09:23:29.400343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:37.730 [2024-10-08 09:23:29.400349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.730 [2024-10-08 09:23:29.400438] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.730 [2024-10-08 09:23:29.400446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.730 [2024-10-08 09:23:29.400453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.730 [2024-10-08 09:23:29.400473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.730 [2024-10-08 09:23:29.400490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.730 [2024-10-08 09:23:29.400501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.730 [2024-10-08 09:23:29.400511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.730 [2024-10-08 09:23:29.400516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.730 [2024-10-08 09:23:29.400521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.730 [2024-10-08 09:23:29.400526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:37.730 [2024-10-08 09:23:29.400533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.730 [2024-10-08 09:23:29.400545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400549] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.730 [2024-10-08 09:23:29.400560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.730 [2024-10-08 09:23:29.400575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.730 [2024-10-08 09:23:29.400591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:37.730 [2024-10-08 09:23:29.400606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.730 [2024-10-08 09:23:29.400622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.730 [2024-10-08 09:23:29.400632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.730 [2024-10-08 09:23:29.400637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:37.730 [2024-10-08 09:23:29.400642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.730 [2024-10-08 09:23:29.400647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:37.730 [2024-10-08 09:23:29.400652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:37.730 [2024-10-08 09:23:29.400657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:37.730 [2024-10-08 09:23:29.400668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:37.730 [2024-10-08 09:23:29.400673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.730 [2024-10-08 09:23:29.400685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.730 [2024-10-08 09:23:29.400691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.730 [2024-10-08 09:23:29.400703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:37.730 [2024-10-08 09:23:29.400708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.730 [2024-10-08 09:23:29.400713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.730 
[2024-10-08 09:23:29.400718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.730 [2024-10-08 09:23:29.400723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.730 [2024-10-08 09:23:29.400729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.730 [2024-10-08 09:23:29.400735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.730 [2024-10-08 09:23:29.400742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.730 [2024-10-08 09:23:29.400756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:37.730 [2024-10-08 09:23:29.400762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:37.730 [2024-10-08 09:23:29.400768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:37.730 [2024-10-08 09:23:29.400773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:37.730 [2024-10-08 09:23:29.400779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:37.730 [2024-10-08 09:23:29.400784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:37.730 [2024-10-08 09:23:29.400789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:37.730 [2024-10-08 09:23:29.400795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:37.730 [2024-10-08 09:23:29.400801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:37.730 [2024-10-08 09:23:29.400830] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:37.730 [2024-10-08 09:23:29.400836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:37.730 [2024-10-08 09:23:29.400849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:37.730 [2024-10-08 09:23:29.400855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:37.730 [2024-10-08 09:23:29.400861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:37.730 [2024-10-08 09:23:29.400867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.730 [2024-10-08 09:23:29.400875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:37.730 [2024-10-08 09:23:29.400881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:17:37.730 [2024-10-08 09:23:29.400887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.989 [2024-10-08 09:23:29.437653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.989 [2024-10-08 09:23:29.437704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.989 [2024-10-08 09:23:29.437718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.711 ms 00:17:37.989 [2024-10-08 09:23:29.437726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.989 [2024-10-08 09:23:29.437878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.989 [2024-10-08 09:23:29.437891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.989 [2024-10-08 09:23:29.437902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:37.989 [2024-10-08 09:23:29.437910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.464768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.464800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.990 [2024-10-08 09:23:29.464809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.834 ms 00:17:37.990 [2024-10-08 09:23:29.464815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.464868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.464875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.990 [2024-10-08 09:23:29.464883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.990 [2024-10-08 09:23:29.464889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.465280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.465305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.990 [2024-10-08 09:23:29.465313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:37.990 [2024-10-08 09:23:29.465320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.465446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.465457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.990 [2024-10-08 09:23:29.465464] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:37.990 [2024-10-08 09:23:29.465470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.477033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.477060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.990 [2024-10-08 09:23:29.477069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.545 ms 00:17:37.990 [2024-10-08 09:23:29.477075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.487371] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:37.990 [2024-10-08 09:23:29.487513] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:37.990 [2024-10-08 09:23:29.487527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.487535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:37.990 [2024-10-08 09:23:29.487542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.346 ms 00:17:37.990 [2024-10-08 09:23:29.487548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.506455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.506567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:37.990 [2024-10-08 09:23:29.506585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.847 ms 00:17:37.990 [2024-10-08 09:23:29.506592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.515517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.515545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:37.990 [2024-10-08 09:23:29.515553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.870 ms 00:17:37.990 [2024-10-08 09:23:29.515559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.524316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.524342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:37.990 [2024-10-08 09:23:29.524350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.713 ms 00:17:37.990 [2024-10-08 09:23:29.524356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.524840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.524859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.990 [2024-10-08 09:23:29.524867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:37.990 [2024-10-08 09:23:29.524873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.573210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.573258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:37.990 [2024-10-08 09:23:29.573270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.318 ms 00:17:37.990 [2024-10-08 09:23:29.573277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.581583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:37.990 [2024-10-08 09:23:29.596199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.596240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.990 [2024-10-08 09:23:29.596252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.845 ms 00:17:37.990 [2024-10-08 09:23:29.596259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.596357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.596367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:37.990 [2024-10-08 09:23:29.596374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:37.990 [2024-10-08 09:23:29.596381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.596450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.596478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.990 [2024-10-08 09:23:29.596485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:37.990 [2024-10-08 09:23:29.596491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.596510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.596517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.990 [2024-10-08 09:23:29.596524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.990 [2024-10-08 09:23:29.596530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.596562] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:37.990 [2024-10-08 09:23:29.596570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.596577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:37.990 [2024-10-08 09:23:29.596586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:37.990 [2024-10-08 09:23:29.596592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.615339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.615402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.990 [2024-10-08 09:23:29.615412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms 00:17:37.990 [2024-10-08 09:23:29.615418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.990 [2024-10-08 09:23:29.615498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.990 [2024-10-08 09:23:29.615510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.990 [2024-10-08 09:23:29.615517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:37.990 [2024-10-08 09:23:29.615524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
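Every FTL management step is reported by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status quadruple, so slow phases of a startup like this one (Restore P2L checkpoints at 48.318 ms, Initialize metadata at 36.711 ms) can be ranked straight from the console output. A minimal sketch, assuming this output was saved one entry per line under the hypothetical name build.log:

```bash
# Rank FTL management steps by the duration that trace_step reports;
# "build.log" is an assumed filename for this console output.
awk '
    /trace_step:.*name: /     { sub(/.*name: /, "");     name = $0 }
    /trace_step:.*duration: / { sub(/.*duration: /, ""); print $1 "\t" name }
' build.log | sort -rn | head -5
```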
00:17:37.990 [2024-10-08 09:23:29.616303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.990 [2024-10-08 09:23:29.618716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 240.931 ms, result 0 00:17:37.990 [2024-10-08 09:23:29.619454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.990 [2024-10-08 09:23:29.634295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.249  [2024-10-08T09:23:29.932Z] Copying: 4096/4096 [kB] (average 44 MBps)[2024-10-08 09:23:29.726237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.249 [2024-10-08 09:23:29.734999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.735032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.249 [2024-10-08 09:23:29.735043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.249 [2024-10-08 09:23:29.735051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.735072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.249 [2024-10-08 09:23:29.737869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.737894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.249 [2024-10-08 09:23:29.737905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:17:38.249 [2024-10-08 09:23:29.737913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.739604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.739636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.249 [2024-10-08 09:23:29.739646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:17:38.249 [2024-10-08 09:23:29.739653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.743599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.743621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.249 [2024-10-08 09:23:29.743631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.929 ms 00:17:38.249 [2024-10-08 09:23:29.743638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.750543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.750568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.249 [2024-10-08 09:23:29.750581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.881 ms 00:17:38.249 [2024-10-08 09:23:29.750589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.772867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.772983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.249 [2024-10-08 09:23:29.772999] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.226 ms 00:17:38.249 [2024-10-08 09:23:29.773006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.787338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.787378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.249 [2024-10-08 09:23:29.787404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.302 ms 00:17:38.249 [2024-10-08 09:23:29.787413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.787546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.787579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.249 [2024-10-08 09:23:29.787588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:38.249 [2024-10-08 09:23:29.787595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.810696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.810802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.249 [2024-10-08 09:23:29.810818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:17:38.249 [2024-10-08 09:23:29.810825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.833723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.833824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.249 [2024-10-08 09:23:29.833838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:17:38.249 [2024-10-08 09:23:29.833846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.856259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.856288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.249 [2024-10-08 09:23:29.856297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.383 ms 00:17:38.249 [2024-10-08 09:23:29.856304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.878785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.249 [2024-10-08 09:23:29.878813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.249 [2024-10-08 09:23:29.878822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.424 ms 00:17:38.249 [2024-10-08 09:23:29.878829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.249 [2024-10-08 09:23:29.878860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.249 [2024-10-08 09:23:29.878874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.249 [2024-10-08 09:23:29.878906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.249 [2024-10-08 09:23:29.878999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879508] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.250 [2024-10-08 09:23:29.879690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.250 [2024-10-08 09:23:29.879698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:38.250 [2024-10-08 09:23:29.879706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.250 [2024-10-08 09:23:29.879713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:38.250 [2024-10-08 09:23:29.879720] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.250 [2024-10-08 09:23:29.879730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.250 [2024-10-08 09:23:29.879738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.250 [2024-10-08 09:23:29.879745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.250 [2024-10-08 09:23:29.879753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.250 [2024-10-08 09:23:29.879759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.250 [2024-10-08 09:23:29.879765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.250 [2024-10-08 09:23:29.879773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.250 [2024-10-08 09:23:29.879780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.250 [2024-10-08 09:23:29.879789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:17:38.250 [2024-10-08 09:23:29.879796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.250 [2024-10-08 09:23:29.892229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.251 [2024-10-08 09:23:29.892262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.251 [2024-10-08 09:23:29.892273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.417 ms 00:17:38.251 [2024-10-08 09:23:29.892281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.251 [2024-10-08 09:23:29.892665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.251 [2024-10-08 09:23:29.892680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.251 [2024-10-08 09:23:29.892689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:38.251 [2024-10-08 09:23:29.892696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.251 [2024-10-08 09:23:29.924714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.251 [2024-10-08 09:23:29.924762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.251 [2024-10-08 09:23:29.924771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.251 [2024-10-08 09:23:29.924779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.251 [2024-10-08 09:23:29.924870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.251 [2024-10-08 09:23:29.924879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.251 [2024-10-08 09:23:29.924887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.251 [2024-10-08 09:23:29.924894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.251 [2024-10-08 09:23:29.924934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.251 [2024-10-08 09:23:29.924947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.251 [2024-10-08 09:23:29.924955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.251 [2024-10-08 09:23:29.924963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.251 [2024-10-08 09:23:29.924980] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.251 [2024-10-08 09:23:29.924987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.251 [2024-10-08 09:23:29.924995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.251 [2024-10-08 09:23:29.925002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.006613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.006846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.508 [2024-10-08 09:23:30.006864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.006873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.072954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.073008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.508 [2024-10-08 09:23:30.073020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.073028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.073118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.073129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.508 [2024-10-08 09:23:30.073140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.073149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.073180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.073189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.508 [2024-10-08 09:23:30.073198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.073205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.073301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.073311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.508 [2024-10-08 09:23:30.073320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.073330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.508 [2024-10-08 09:23:30.073365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.508 [2024-10-08 09:23:30.073375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:38.508 [2024-10-08 09:23:30.073383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.508 [2024-10-08 09:23:30.073415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.509 [2024-10-08 09:23:30.073456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.509 [2024-10-08 09:23:30.073467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.509 [2024-10-08 09:23:30.073475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.509 [2024-10-08 09:23:30.073485] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:38.509 [2024-10-08 09:23:30.073531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.509 [2024-10-08 09:23:30.073541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.509 [2024-10-08 09:23:30.073549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.509 [2024-10-08 09:23:30.073557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.509 [2024-10-08 09:23:30.073710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.690 ms, result 0 00:17:39.443 00:17:39.443 00:17:39.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.443 09:23:30 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74362 00:17:39.443 09:23:30 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74362 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74362 ']' 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.443 09:23:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:39.443 09:23:30 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:39.443 [2024-10-08 09:23:31.021199] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
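The previous target has just finished its FTL shutdown above, and trim.sh now brings up a fresh one for the trim test proper. A minimal bash sketch of the launch-and-wait pattern visible in the trim.sh@92-94 trace lines; the binary path, socket path, and helper names are taken from this log (the helpers live in the sourced common/autotest_common.sh), the rest is illustrative:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!                     # PID 74362 in this run
  # waitforlisten polls until the target answers RPCs on the UNIX domain
  # socket /var/tmp/spdk.sock, with max_retries=100 as echoed in the trace
  waitforlisten "$svcpid"
  # ... issue RPCs against the target ...
  killprocess "$svcpid"         # trim.sh@102 does this once the test body is done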
00:17:39.443 [2024-10-08 09:23:31.021327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74362 ] 00:17:39.701 [2024-10-08 09:23:31.167256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.701 [2024-10-08 09:23:31.373733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.635 09:23:32 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.635 09:23:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:40.635 09:23:32 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:40.635 [2024-10-08 09:23:32.214278] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.635 [2024-10-08 09:23:32.214574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.895 [2024-10-08 09:23:32.386187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.386436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.895 [2024-10-08 09:23:32.386463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.895 [2024-10-08 09:23:32.386473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.389253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.389289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.895 [2024-10-08 09:23:32.389301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:17:40.895 [2024-10-08 09:23:32.389310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.389413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.895 [2024-10-08 09:23:32.390319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.895 [2024-10-08 09:23:32.390499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.390533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.895 [2024-10-08 09:23:32.390565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:17:40.895 [2024-10-08 09:23:32.390590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.393085] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.895 [2024-10-08 09:23:32.410247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.410287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.895 [2024-10-08 09:23:32.410300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.180 ms 00:17:40.895 [2024-10-08 09:23:32.410310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.410417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.410434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.895 [2024-10-08 09:23:32.410443] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:40.895 [2024-10-08 09:23:32.410453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.417086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.417235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.895 [2024-10-08 09:23:32.417251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.581 ms 00:17:40.895 [2024-10-08 09:23:32.417261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.417410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.417424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.895 [2024-10-08 09:23:32.417433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:40.895 [2024-10-08 09:23:32.417443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.417474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.417486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.895 [2024-10-08 09:23:32.417494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:40.895 [2024-10-08 09:23:32.417504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.417530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.895 [2024-10-08 09:23:32.421070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.421115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.895 [2024-10-08 09:23:32.421127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.546 ms 00:17:40.895 [2024-10-08 09:23:32.421137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.421184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.421193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.895 [2024-10-08 09:23:32.421203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:40.895 [2024-10-08 09:23:32.421211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.421234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.895 [2024-10-08 09:23:32.421254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.895 [2024-10-08 09:23:32.421298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.895 [2024-10-08 09:23:32.421315] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.895 [2024-10-08 09:23:32.421436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.895 [2024-10-08 09:23:32.421448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.895 [2024-10-08 09:23:32.421462] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.895 [2024-10-08 09:23:32.421472] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421492] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.895 [2024-10-08 09:23:32.421501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.895 [2024-10-08 09:23:32.421508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.895 [2024-10-08 09:23:32.421519] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.895 [2024-10-08 09:23:32.421529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.421537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.895 [2024-10-08 09:23:32.421545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:40.895 [2024-10-08 09:23:32.421554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.421656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.895 [2024-10-08 09:23:32.421668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.895 [2024-10-08 09:23:32.421676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:40.895 [2024-10-08 09:23:32.421684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.895 [2024-10-08 09:23:32.421788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.895 [2024-10-08 09:23:32.421802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.895 [2024-10-08 09:23:32.421810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.895 [2024-10-08 09:23:32.421835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.895 [2024-10-08 09:23:32.421862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.895 [2024-10-08 09:23:32.421876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.895 [2024-10-08 09:23:32.421884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.895 [2024-10-08 09:23:32.421890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.895 [2024-10-08 09:23:32.421898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.895 [2024-10-08 09:23:32.421904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.895 [2024-10-08 09:23:32.421913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.895 
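A quick consistency check one can do on the l2p region in this dump: the layout setup lines above report 23592960 L2P entries with a 4-byte address size, and 23592960 × 4 B = 94,371,840 B = exactly 90 MiB, which is precisely the 90.00 MiB block count logged for Region l2p.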
[2024-10-08 09:23:32.421919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.895 [2024-10-08 09:23:32.421927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.895 [2024-10-08 09:23:32.421958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.895 [2024-10-08 09:23:32.421983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.895 [2024-10-08 09:23:32.421990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.895 [2024-10-08 09:23:32.421998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.895 [2024-10-08 09:23:32.422006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.895 [2024-10-08 09:23:32.422014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.895 [2024-10-08 09:23:32.422021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.895 [2024-10-08 09:23:32.422029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.896 [2024-10-08 09:23:32.422036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.896 [2024-10-08 09:23:32.422043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.896 [2024-10-08 09:23:32.422050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.896 [2024-10-08 09:23:32.422059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.896 [2024-10-08 09:23:32.422066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.896 [2024-10-08 09:23:32.422074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.896 [2024-10-08 09:23:32.422080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.896 [2024-10-08 09:23:32.422088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.896 [2024-10-08 09:23:32.422094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.896 [2024-10-08 09:23:32.422105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.896 [2024-10-08 09:23:32.422111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.896 [2024-10-08 09:23:32.422119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.896 [2024-10-08 09:23:32.422126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.896 [2024-10-08 09:23:32.422134] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.896 [2024-10-08 09:23:32.422141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.896 [2024-10-08 09:23:32.422151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.896 [2024-10-08 09:23:32.422158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.896 [2024-10-08 09:23:32.422167] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:40.896 [2024-10-08 09:23:32.422174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.896 [2024-10-08 09:23:32.422182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.896 [2024-10-08 09:23:32.422190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.896 [2024-10-08 09:23:32.422199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.896 [2024-10-08 09:23:32.422205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.896 [2024-10-08 09:23:32.422215] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.896 [2024-10-08 09:23:32.422224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.896 [2024-10-08 09:23:32.422243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.896 [2024-10-08 09:23:32.422252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.896 [2024-10-08 09:23:32.422259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.896 [2024-10-08 09:23:32.422269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.896 [2024-10-08 09:23:32.422276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.896 [2024-10-08 09:23:32.422284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.896 [2024-10-08 09:23:32.422291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.896 [2024-10-08 09:23:32.422300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.896 [2024-10-08 09:23:32.422307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.896 [2024-10-08 09:23:32.422347] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.896 [2024-10-08 
09:23:32.422354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.896 [2024-10-08 09:23:32.422374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.896 [2024-10-08 09:23:32.422383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.896 [2024-10-08 09:23:32.422402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.896 [2024-10-08 09:23:32.422411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.422419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.896 [2024-10-08 09:23:32.422428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:17:40.896 [2024-10-08 09:23:32.422435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.451463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.451662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.896 [2024-10-08 09:23:32.451685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.963 ms 00:17:40.896 [2024-10-08 09:23:32.451694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.451850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.451861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.896 [2024-10-08 09:23:32.451871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:40.896 [2024-10-08 09:23:32.451878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.492123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.492183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.896 [2024-10-08 09:23:32.492201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.215 ms 00:17:40.896 [2024-10-08 09:23:32.492211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.492354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.492365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.896 [2024-10-08 09:23:32.492377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.896 [2024-10-08 09:23:32.492403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.492834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.492851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.896 [2024-10-08 09:23:32.492863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:17:40.896 [2024-10-08 09:23:32.492871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.493018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.493080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.896 [2024-10-08 09:23:32.493094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:40.896 [2024-10-08 09:23:32.493102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.510161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.510216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.896 [2024-10-08 09:23:32.510232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.030 ms 00:17:40.896 [2024-10-08 09:23:32.510243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.523254] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:40.896 [2024-10-08 09:23:32.523299] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.896 [2024-10-08 09:23:32.523314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.523324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.896 [2024-10-08 09:23:32.523336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:17:40.896 [2024-10-08 09:23:32.523344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.547993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.548043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.896 [2024-10-08 09:23:32.548057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:17:40.896 [2024-10-08 09:23:32.548072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.560108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.560155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.896 [2024-10-08 09:23:32.560171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.926 ms 00:17:40.896 [2024-10-08 09:23:32.560179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.571647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.571681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.896 [2024-10-08 09:23:32.571694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.371 ms 00:17:40.896 [2024-10-08 09:23:32.571702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.896 [2024-10-08 09:23:32.572353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.896 [2024-10-08 09:23:32.572370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.896 [2024-10-08 09:23:32.572382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:17:40.896 [2024-10-08 09:23:32.572426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 
09:23:32.631401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.631476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.155 [2024-10-08 09:23:32.631494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.928 ms 00:17:41.155 [2024-10-08 09:23:32.631504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.642309] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.155 [2024-10-08 09:23:32.658840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.658888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.155 [2024-10-08 09:23:32.658901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.227 ms 00:17:41.155 [2024-10-08 09:23:32.658911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.659005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.659017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.155 [2024-10-08 09:23:32.659026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:41.155 [2024-10-08 09:23:32.659036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.659090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.659101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.155 [2024-10-08 09:23:32.659109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:41.155 [2024-10-08 09:23:32.659119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.659144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.659154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.155 [2024-10-08 09:23:32.659162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.155 [2024-10-08 09:23:32.659177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.659210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.155 [2024-10-08 09:23:32.659226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.659234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.155 [2024-10-08 09:23:32.659243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:41.155 [2024-10-08 09:23:32.659250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.683026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.683060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.155 [2024-10-08 09:23:32.683074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.750 ms 00:17:41.155 [2024-10-08 09:23:32.683082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.683177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.155 [2024-10-08 09:23:32.683187] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.155 [2024-10-08 09:23:32.683198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:41.155 [2024-10-08 09:23:32.683205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.155 [2024-10-08 09:23:32.684106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.155 [2024-10-08 09:23:32.687012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.604 ms, result 0 00:17:41.155 [2024-10-08 09:23:32.688173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.155 Some configs were skipped because the RPC state that can call them passed over. 00:17:41.156 09:23:32 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:41.414 [2024-10-08 09:23:32.919117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.414 [2024-10-08 09:23:32.919372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.414 [2024-10-08 09:23:32.919466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:17:41.414 [2024-10-08 09:23:32.919498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.414 [2024-10-08 09:23:32.919581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.402 ms, result 0 00:17:41.414 true 00:17:41.414 09:23:32 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:41.671 [2024-10-08 09:23:33.127006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.671 [2024-10-08 09:23:33.127209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.671 [2024-10-08 09:23:33.127264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:17:41.671 [2024-10-08 09:23:33.127287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.671 [2024-10-08 09:23:33.127343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.890 ms, result 0 00:17:41.671 true 00:17:41.671 09:23:33 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74362 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74362 ']' 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74362 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74362 00:17:41.671 killing process with pid 74362 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74362' 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74362 00:17:41.671 09:23:33 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74362 00:17:42.239 [2024-10-08 09:23:33.834746] 
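Each of the two trim passes traced in this block is a single framework RPC. Given the 23592960-entry LBA space reported during startup, the second pass's --lba 23591936 is just 23592960 − 1024, i.e. the same 1024-block unmap aimed at the very end of the device. Both invocations, verbatim from the trace (paths relative to the spdk repo):
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024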
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.834816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.239 [2024-10-08 09:23:33.834829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.239 [2024-10-08 09:23:33.834837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.834857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.239 [2024-10-08 09:23:33.837074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.837103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.239 [2024-10-08 09:23:33.837115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.200 ms 00:17:42.239 [2024-10-08 09:23:33.837122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.837353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.837361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.239 [2024-10-08 09:23:33.837369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:42.239 [2024-10-08 09:23:33.837377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.840681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.840707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.239 [2024-10-08 09:23:33.840716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:17:42.239 [2024-10-08 09:23:33.840722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.846035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.846060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.239 [2024-10-08 09:23:33.846072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms 00:17:42.239 [2024-10-08 09:23:33.846080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.853945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.853972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.239 [2024-10-08 09:23:33.853984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.818 ms 00:17:42.239 [2024-10-08 09:23:33.853990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.860920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.860947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.239 [2024-10-08 09:23:33.860958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.896 ms 00:17:42.239 [2024-10-08 09:23:33.860972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.861084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.861093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.239 [2024-10-08 09:23:33.861101] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:42.239 [2024-10-08 09:23:33.861110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.869077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.869103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.239 [2024-10-08 09:23:33.869112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.949 ms 00:17:42.239 [2024-10-08 09:23:33.869118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.876702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.876739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.239 [2024-10-08 09:23:33.876753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:17:42.239 [2024-10-08 09:23:33.876759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.884086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.884243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.239 [2024-10-08 09:23:33.884258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.295 ms 00:17:42.239 [2024-10-08 09:23:33.884264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.891317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.239 [2024-10-08 09:23:33.891477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.239 [2024-10-08 09:23:33.891492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.999 ms 00:17:42.239 [2024-10-08 09:23:33.891497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.239 [2024-10-08 09:23:33.891533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.239 [2024-10-08 09:23:33.891548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.239 [2024-10-08 09:23:33.891619] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11 through Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.240 [2024-10-08 09:23:33.892228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.240 [2024-10-08 09:23:33.892237] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:42.240 [2024-10-08 09:23:33.892243] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.240 [2024-10-08 09:23:33.892250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.240 [2024-10-08 09:23:33.892256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.240 [2024-10-08 09:23:33.892264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.240 [2024-10-08 09:23:33.892275] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.240 [2024-10-08 09:23:33.892282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.240 [2024-10-08 09:23:33.892290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.240 [2024-10-08 09:23:33.892296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.240 [2024-10-08 09:23:33.892300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.240 [2024-10-08 09:23:33.892307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
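One reading note on the statistics just dumped: WAF is simply the ratio of the two write counters above it, total writes / user writes = 960 / 0, which the dump prints as inf because this sequence issued no user writes at all; the 960 writes are the FTL's own metadata traffic.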
00:17:42.240 [2024-10-08 09:23:33.892313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.240 [2024-10-08 09:23:33.892322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:17:42.240 [2024-10-08 09:23:33.892328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.240 [2024-10-08 09:23:33.902432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.240 [2024-10-08 09:23:33.902541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.240 [2024-10-08 09:23:33.902559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.086 ms 00:17:42.240 [2024-10-08 09:23:33.902565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.240 [2024-10-08 09:23:33.902880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.240 [2024-10-08 09:23:33.902895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.240 [2024-10-08 09:23:33.902903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:42.240 [2024-10-08 09:23:33.902910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:33.935299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:33.935349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.498 [2024-10-08 09:23:33.935362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:33.935383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:33.935516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:33.935524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.498 [2024-10-08 09:23:33.935532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:33.935538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:33.935584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:33.935593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.498 [2024-10-08 09:23:33.935604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:33.935610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:33.935629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:33.935635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.498 [2024-10-08 09:23:33.935643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:33.935649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:33.998083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:33.998313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.498 [2024-10-08 09:23:33.998336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:33.998343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 
09:23:34.050568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:34.050628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.498 [2024-10-08 09:23:34.050641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:34.050648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:34.050746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:34.050754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.498 [2024-10-08 09:23:34.050765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.498 [2024-10-08 09:23:34.050771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.498 [2024-10-08 09:23:34.050799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.498 [2024-10-08 09:23:34.050808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.499 [2024-10-08 09:23:34.050816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.499 [2024-10-08 09:23:34.050822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-10-08 09:23:34.050900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.499 [2024-10-08 09:23:34.050907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.499 [2024-10-08 09:23:34.050916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.499 [2024-10-08 09:23:34.050922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-10-08 09:23:34.050949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.499 [2024-10-08 09:23:34.050955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.499 [2024-10-08 09:23:34.050965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.499 [2024-10-08 09:23:34.050970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-10-08 09:23:34.051007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.499 [2024-10-08 09:23:34.051014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.499 [2024-10-08 09:23:34.051023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.499 [2024-10-08 09:23:34.051029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-10-08 09:23:34.051069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.499 [2024-10-08 09:23:34.051078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.499 [2024-10-08 09:23:34.051087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.499 [2024-10-08 09:23:34.051092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.499 [2024-10-08 09:23:34.051212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.447 ms, result 0 00:17:43.064 09:23:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.322 [2024-10-08 09:23:34.762506] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:17:43.322 [2024-10-08 09:23:34.762631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74420 ] 00:17:43.322 [2024-10-08 09:23:34.910220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.580 [2024-10-08 09:23:35.082270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.839 [2024-10-08 09:23:35.312675] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.839 [2024-10-08 09:23:35.312743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.839 [2024-10-08 09:23:35.466443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.466498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.839 [2024-10-08 09:23:35.466513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:43.839 [2024-10-08 09:23:35.466520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.468726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.468756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.839 [2024-10-08 09:23:35.468764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:17:43.839 [2024-10-08 09:23:35.468770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.468837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.839 [2024-10-08 09:23:35.469374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.839 [2024-10-08 09:23:35.469407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.469415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.839 [2024-10-08 09:23:35.469423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:17:43.839 [2024-10-08 09:23:35.469429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.470718] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.839 [2024-10-08 09:23:35.480830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.480860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.839 [2024-10-08 09:23:35.480869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.114 ms 00:17:43.839 [2024-10-08 09:23:35.480876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.480951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.480960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.839 [2024-10-08 09:23:35.480971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:43.839 [2024-10-08 
09:23:35.480977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.487309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.487336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.839 [2024-10-08 09:23:35.487344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.298 ms 00:17:43.839 [2024-10-08 09:23:35.487351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.487446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.487457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.839 [2024-10-08 09:23:35.487464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:43.839 [2024-10-08 09:23:35.487471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.487491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.487497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.839 [2024-10-08 09:23:35.487504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:43.839 [2024-10-08 09:23:35.487509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.487525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.839 [2024-10-08 09:23:35.490431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.490454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.839 [2024-10-08 09:23:35.490462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.909 ms 00:17:43.839 [2024-10-08 09:23:35.490468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.839 [2024-10-08 09:23:35.490498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.839 [2024-10-08 09:23:35.490508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.840 [2024-10-08 09:23:35.490515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:43.840 [2024-10-08 09:23:35.490521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.840 [2024-10-08 09:23:35.490535] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.840 [2024-10-08 09:23:35.490551] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.840 [2024-10-08 09:23:35.490579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.840 [2024-10-08 09:23:35.490591] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.840 [2024-10-08 09:23:35.490675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.840 [2024-10-08 09:23:35.490683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.840 [2024-10-08 09:23:35.490692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:43.840 [2024-10-08 09:23:35.490700] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.840 [2024-10-08 09:23:35.490707] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.840 [2024-10-08 09:23:35.490714] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.840 [2024-10-08 09:23:35.490720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.840 [2024-10-08 09:23:35.490726] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.840 [2024-10-08 09:23:35.490732] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.840 [2024-10-08 09:23:35.490738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.840 [2024-10-08 09:23:35.490744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.840 [2024-10-08 09:23:35.490753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:17:43.840 [2024-10-08 09:23:35.490759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.840 [2024-10-08 09:23:35.490827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.840 [2024-10-08 09:23:35.490834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.840 [2024-10-08 09:23:35.490840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:43.840 [2024-10-08 09:23:35.490846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.840 [2024-10-08 09:23:35.490920] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.840 [2024-10-08 09:23:35.490927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.840 [2024-10-08 09:23:35.490934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.840 [2024-10-08 09:23:35.490942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.490948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.840 [2024-10-08 09:23:35.490954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.490960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.840 [2024-10-08 09:23:35.490966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.840 [2024-10-08 09:23:35.490971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.840 [2024-10-08 09:23:35.490977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.840 [2024-10-08 09:23:35.490982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.840 [2024-10-08 09:23:35.490993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.840 [2024-10-08 09:23:35.490999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.840 [2024-10-08 09:23:35.491004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.840 [2024-10-08 09:23:35.491009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.840 [2024-10-08 09:23:35.491015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:43.840 [2024-10-08 09:23:35.491025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.840 [2024-10-08 09:23:35.491042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.840 [2024-10-08 09:23:35.491058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.840 [2024-10-08 09:23:35.491074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.840 [2024-10-08 09:23:35.491089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.840 [2024-10-08 09:23:35.491105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.840 [2024-10-08 09:23:35.491115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.840 [2024-10-08 09:23:35.491120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.840 [2024-10-08 09:23:35.491125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.840 [2024-10-08 09:23:35.491131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.840 [2024-10-08 09:23:35.491136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.840 [2024-10-08 09:23:35.491141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.840 [2024-10-08 09:23:35.491152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.840 [2024-10-08 09:23:35.491158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491163] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.840 [2024-10-08 09:23:35.491170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.840 [2024-10-08 09:23:35.491176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.840 [2024-10-08 09:23:35.491181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.840 [2024-10-08 09:23:35.491187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.840 [2024-10-08 09:23:35.491192] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.840 [2024-10-08 09:23:35.491198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.841 [2024-10-08 09:23:35.491204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.841 [2024-10-08 09:23:35.491210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.841 [2024-10-08 09:23:35.491215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.841 [2024-10-08 09:23:35.491222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.841 [2024-10-08 09:23:35.491229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.841 [2024-10-08 09:23:35.491244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.841 [2024-10-08 09:23:35.491250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.841 [2024-10-08 09:23:35.491255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.841 [2024-10-08 09:23:35.491261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.841 [2024-10-08 09:23:35.491266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.841 [2024-10-08 09:23:35.491272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.841 [2024-10-08 09:23:35.491277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.841 [2024-10-08 09:23:35.491283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.841 [2024-10-08 09:23:35.491288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.841 [2024-10-08 09:23:35.491317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.841 [2024-10-08 09:23:35.491323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.841 [2024-10-08 09:23:35.491335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.841 [2024-10-08 09:23:35.491341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.841 [2024-10-08 09:23:35.491347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.841 [2024-10-08 09:23:35.491352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.841 [2024-10-08 09:23:35.491360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.841 [2024-10-08 09:23:35.491382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:17:43.841 [2024-10-08 09:23:35.491611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.533659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.533802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.100 [2024-10-08 09:23:35.533854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.952 ms 00:17:44.100 [2024-10-08 09:23:35.533873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.533996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.534023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.100 [2024-10-08 09:23:35.534040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:44.100 [2024-10-08 09:23:35.534055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.560450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.560565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.100 [2024-10-08 09:23:35.560614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.367 ms 00:17:44.100 [2024-10-08 09:23:35.560632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.560718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.560744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.100 [2024-10-08 09:23:35.560761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.100 [2024-10-08 09:23:35.560776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.561174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.561214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.100 [2024-10-08 09:23:35.561232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:44.100 [2024-10-08 09:23:35.561324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.561461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.561548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.100 [2024-10-08 09:23:35.561660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:44.100 [2024-10-08 09:23:35.561686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.573196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.573285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.100 [2024-10-08 09:23:35.573324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.477 ms 00:17:44.100 [2024-10-08 09:23:35.573342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.583534] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.100 [2024-10-08 09:23:35.583636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.100 [2024-10-08 09:23:35.583649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.583656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.100 [2024-10-08 09:23:35.583663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.199 ms 00:17:44.100 [2024-10-08 09:23:35.583669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.613641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.613680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.100 [2024-10-08 09:23:35.613699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.910 ms 00:17:44.100 [2024-10-08 09:23:35.613708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.625129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.625164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.100 [2024-10-08 09:23:35.625175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.340 ms 00:17:44.100 [2024-10-08 09:23:35.625183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.636436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.636466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.100 [2024-10-08 09:23:35.636476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.188 ms 00:17:44.100 [2024-10-08 09:23:35.636484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.637101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.637126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.100 [2024-10-08 09:23:35.637137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:17:44.100 [2024-10-08 09:23:35.637145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.695951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 
09:23:35.696004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.100 [2024-10-08 09:23:35.696019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.781 ms 00:17:44.100 [2024-10-08 09:23:35.696027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.706883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.100 [2024-10-08 09:23:35.723941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.723985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.100 [2024-10-08 09:23:35.723999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.787 ms 00:17:44.100 [2024-10-08 09:23:35.724008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.724114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.724125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.100 [2024-10-08 09:23:35.724135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.100 [2024-10-08 09:23:35.724143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.724202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.724215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.100 [2024-10-08 09:23:35.724224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:44.100 [2024-10-08 09:23:35.724231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.724254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.100 [2024-10-08 09:23:35.724263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.100 [2024-10-08 09:23:35.724271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.100 [2024-10-08 09:23:35.724279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.100 [2024-10-08 09:23:35.724315] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.100 [2024-10-08 09:23:35.724326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.101 [2024-10-08 09:23:35.724334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.101 [2024-10-08 09:23:35.724344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.101 [2024-10-08 09:23:35.724352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.101 [2024-10-08 09:23:35.747983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.101 [2024-10-08 09:23:35.748020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.101 [2024-10-08 09:23:35.748032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.611 ms 00:17:44.101 [2024-10-08 09:23:35.748040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.101 [2024-10-08 09:23:35.748134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.101 [2024-10-08 09:23:35.748148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.101 [2024-10-08 
09:23:35.748158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:44.101 [2024-10-08 09:23:35.748166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.101 [2024-10-08 09:23:35.749032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.101 [2024-10-08 09:23:35.752046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.272 ms, result 0 00:17:44.101 [2024-10-08 09:23:35.752884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.101 [2024-10-08 09:23:35.765585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.472  [2024-10-08T09:23:38.088Z] Copying: 43/256 [MB] (43 MBps) [2024-10-08T09:23:39.025Z] Copying: 86/256 [MB] (42 MBps) [2024-10-08T09:23:39.959Z] Copying: 131/256 [MB] (45 MBps) [2024-10-08T09:23:40.891Z] Copying: 172/256 [MB] (40 MBps) [2024-10-08T09:23:41.823Z] Copying: 218/256 [MB] (45 MBps) [2024-10-08T09:23:42.389Z] Copying: 256/256 [MB] (average 43 MBps)[2024-10-08 09:23:42.103754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.706 [2024-10-08 09:23:42.116460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.706 [2024-10-08 09:23:42.116497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:50.706 [2024-10-08 09:23:42.116512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.706 [2024-10-08 09:23:42.116521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.116545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:50.707 [2024-10-08 09:23:42.119332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.119363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:50.707 [2024-10-08 09:23:42.119401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.771 ms 00:17:50.707 [2024-10-08 09:23:42.119411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.119691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.119705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:50.707 [2024-10-08 09:23:42.119717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:50.707 [2024-10-08 09:23:42.119724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.123647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.123668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:50.707 [2024-10-08 09:23:42.123679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:17:50.707 [2024-10-08 09:23:42.123689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.130606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.130754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:50.707 [2024-10-08 09:23:42.130777] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:17:50.707 [2024-10-08 09:23:42.130785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.154413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.154527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:50.707 [2024-10-08 09:23:42.154588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.558 ms 00:17:50.707 [2024-10-08 09:23:42.154611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.168557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.168667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:50.707 [2024-10-08 09:23:42.168724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.907 ms 00:17:50.707 [2024-10-08 09:23:42.168747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.168929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.169021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:50.707 [2024-10-08 09:23:42.169046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:50.707 [2024-10-08 09:23:42.169066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.192458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.192569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:50.707 [2024-10-08 09:23:42.192583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.354 ms 00:17:50.707 [2024-10-08 09:23:42.192592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.215028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.215058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:50.707 [2024-10-08 09:23:42.215068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.413 ms 00:17:50.707 [2024-10-08 09:23:42.215076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.237118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.237216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:50.707 [2024-10-08 09:23:42.237263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.021 ms 00:17:50.707 [2024-10-08 09:23:42.237284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.259526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.707 [2024-10-08 09:23:42.259626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:50.707 [2024-10-08 09:23:42.259673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.181 ms 00:17:50.707 [2024-10-08 09:23:42.259694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.707 [2024-10-08 09:23:42.259726] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:50.707 [2024-10-08 09:23:42.259752] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259988] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.259995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 
09:23:42.260180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:50.707 [2024-10-08 09:23:42.260224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:17:50.708 [2024-10-08 09:23:42.260370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:17:50.708 [2024-10-08 09:23:42.260598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:50.708 [2024-10-08 09:23:42.260607] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa396233-677a-41d2-8a2d-a8108e4f192f 00:17:50.708 [2024-10-08 09:23:42.260615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:50.708 [2024-10-08 09:23:42.260622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:50.708 [2024-10-08 09:23:42.260630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:50.708 [2024-10-08 09:23:42.260641] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:50.708 [2024-10-08 09:23:42.260649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:50.708 [2024-10-08 09:23:42.260657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:50.708 [2024-10-08 09:23:42.260664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:50.708 [2024-10-08 09:23:42.260671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:50.708 [2024-10-08 09:23:42.260678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:50.708 [2024-10-08 09:23:42.260686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.708 [2024-10-08 09:23:42.260693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:50.708 [2024-10-08 09:23:42.260702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:17:50.708 [2024-10-08 09:23:42.260708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.273612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.708 [2024-10-08 09:23:42.273718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:50.708 [2024-10-08 09:23:42.273732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.874 ms 00:17:50.708 [2024-10-08 09:23:42.273741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.274109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.708 [2024-10-08 09:23:42.274123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:50.708 [2024-10-08 09:23:42.274132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:17:50.708 [2024-10-08 09:23:42.274140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.306466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.708 [2024-10-08 09:23:42.306572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.708 [2024-10-08 09:23:42.306624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.708 [2024-10-08 09:23:42.306647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.306783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.708 [2024-10-08 09:23:42.306810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.708 [2024-10-08 09:23:42.306856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.708 [2024-10-08 09:23:42.306878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:50.708 [2024-10-08 09:23:42.306941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.708 [2024-10-08 09:23:42.307021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.708 [2024-10-08 09:23:42.307044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.708 [2024-10-08 09:23:42.307064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.307122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.708 [2024-10-08 09:23:42.307145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.708 [2024-10-08 09:23:42.307164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.708 [2024-10-08 09:23:42.307183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.708 [2024-10-08 09:23:42.387591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.708 [2024-10-08 09:23:42.387729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.708 [2024-10-08 09:23:42.387782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.708 [2024-10-08 09:23:42.387804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.454817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.966 [2024-10-08 09:23:42.454864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.966 [2024-10-08 09:23:42.454876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.966 [2024-10-08 09:23:42.454884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.454967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.966 [2024-10-08 09:23:42.454978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.966 [2024-10-08 09:23:42.454986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.966 [2024-10-08 09:23:42.454998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.455028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.966 [2024-10-08 09:23:42.455037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.966 [2024-10-08 09:23:42.455046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.966 [2024-10-08 09:23:42.455053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.455142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.966 [2024-10-08 09:23:42.455152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.966 [2024-10-08 09:23:42.455161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.966 [2024-10-08 09:23:42.455171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.455204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.966 [2024-10-08 09:23:42.455214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:50.966 [2024-10-08 09:23:42.455222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.966 [2024-10-08 
09:23:42.455230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.966 [2024-10-08 09:23:42.455271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.967 [2024-10-08 09:23:42.455279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.967 [2024-10-08 09:23:42.455288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.967 [2024-10-08 09:23:42.455296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.967 [2024-10-08 09:23:42.455346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.967 [2024-10-08 09:23:42.455355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.967 [2024-10-08 09:23:42.455364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.967 [2024-10-08 09:23:42.455541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.967 [2024-10-08 09:23:42.455730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.260 ms, result 0 00:17:51.900 00:17:51.900 00:17:51.900 09:23:43 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:52.157 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:52.157 09:23:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:52.157 09:23:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:52.157 09:23:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:52.157 09:23:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.157 09:23:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:52.415 09:23:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:52.415 09:23:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74362 00:17:52.415 09:23:43 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74362 ']' 00:17:52.415 09:23:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74362 00:17:52.415 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74362) - No such process 00:17:52.415 Process with pid 74362 is not found 00:17:52.415 09:23:43 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74362 is not found' 00:17:52.415 ************************************ 00:17:52.415 END TEST ftl_trim 00:17:52.415 ************************************ 00:17:52.415 00:17:52.415 real 0m51.059s 00:17:52.415 user 1m7.495s 00:17:52.415 sys 0m13.559s 00:17:52.415 09:23:43 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.415 09:23:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:52.415 09:23:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:52.415 09:23:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:52.415 09:23:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.415 09:23:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.415 ************************************ 00:17:52.415 START TEST ftl_restore 00:17:52.415 ************************************ 00:17:52.415 09:23:43 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:17:52.415 * Looking for test storage... 00:17:52.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.415 09:23:44 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.415 --rc genhtml_branch_coverage=1 00:17:52.415 --rc genhtml_function_coverage=1 00:17:52.415 --rc genhtml_legend=1 00:17:52.415 --rc geninfo_all_blocks=1 00:17:52.415 --rc geninfo_unexecuted_blocks=1 00:17:52.415 00:17:52.415 ' 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.415 --rc 
genhtml_branch_coverage=1 00:17:52.415 --rc genhtml_function_coverage=1 00:17:52.415 --rc genhtml_legend=1 00:17:52.415 --rc geninfo_all_blocks=1 00:17:52.415 --rc geninfo_unexecuted_blocks=1 00:17:52.415 00:17:52.415 ' 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.415 --rc genhtml_branch_coverage=1 00:17:52.415 --rc genhtml_function_coverage=1 00:17:52.415 --rc genhtml_legend=1 00:17:52.415 --rc geninfo_all_blocks=1 00:17:52.415 --rc geninfo_unexecuted_blocks=1 00:17:52.415 00:17:52.415 ' 00:17:52.415 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.415 --rc genhtml_branch_coverage=1 00:17:52.415 --rc genhtml_function_coverage=1 00:17:52.415 --rc genhtml_legend=1 00:17:52.415 --rc geninfo_all_blocks=1 00:17:52.415 --rc geninfo_unexecuted_blocks=1 00:17:52.415 00:17:52.415 ' 00:17:52.416 09:23:44 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.416 09:23:44 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:52.416 09:23:44 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.416 09:23:44 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.416 09:23:44 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.EIuT34dFUL 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74581 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74581 00:17:52.674 09:23:44 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74581 ']' 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.674 09:23:44 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:52.674 [2024-10-08 09:23:44.185656] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
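spdk_tgt is launched in the background and the test then blocks in waitforlisten until pid 74581 answers on the RPC socket. Condensed, the pattern is roughly the following sketch (not the autotest_common.sh implementation; rpc_get_methods is just a cheap RPC to poll with):

# start the app target and poll its default RPC socket until it responds
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
svcpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done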
00:17:52.674 [2024-10-08 09:23:44.185963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74581 ] 00:17:52.674 [2024-10-08 09:23:44.333236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.932 [2024-10-08 09:23:44.540043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.499 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.499 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:53.757 09:23:45 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:53.757 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:53.757 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:53.757 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:53.757 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:53.757 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.015 { 00:17:54.015 "name": "nvme0n1", 00:17:54.015 "aliases": [ 00:17:54.015 "e5df0859-7f89-4a16-bee1-746307283865" 00:17:54.015 ], 00:17:54.015 "product_name": "NVMe disk", 00:17:54.015 "block_size": 4096, 00:17:54.015 "num_blocks": 1310720, 00:17:54.015 "uuid": "e5df0859-7f89-4a16-bee1-746307283865", 00:17:54.015 "numa_id": -1, 00:17:54.015 "assigned_rate_limits": { 00:17:54.015 "rw_ios_per_sec": 0, 00:17:54.015 "rw_mbytes_per_sec": 0, 00:17:54.015 "r_mbytes_per_sec": 0, 00:17:54.015 "w_mbytes_per_sec": 0 00:17:54.015 }, 00:17:54.015 "claimed": true, 00:17:54.015 "claim_type": "read_many_write_one", 00:17:54.015 "zoned": false, 00:17:54.015 "supported_io_types": { 00:17:54.015 "read": true, 00:17:54.015 "write": true, 00:17:54.015 "unmap": true, 00:17:54.015 "flush": true, 00:17:54.015 "reset": true, 00:17:54.015 "nvme_admin": true, 00:17:54.015 "nvme_io": true, 00:17:54.015 "nvme_io_md": false, 00:17:54.015 "write_zeroes": true, 00:17:54.015 "zcopy": false, 00:17:54.015 "get_zone_info": false, 00:17:54.015 "zone_management": false, 00:17:54.015 "zone_append": false, 00:17:54.015 "compare": true, 00:17:54.015 "compare_and_write": false, 00:17:54.015 "abort": true, 00:17:54.015 "seek_hole": false, 00:17:54.015 "seek_data": false, 00:17:54.015 "copy": true, 00:17:54.015 "nvme_iov_md": false 00:17:54.015 }, 00:17:54.015 "driver_specific": { 00:17:54.015 "nvme": [ 
00:17:54.015 { 00:17:54.015 "pci_address": "0000:00:11.0", 00:17:54.015 "trid": { 00:17:54.015 "trtype": "PCIe", 00:17:54.015 "traddr": "0000:00:11.0" 00:17:54.015 }, 00:17:54.015 "ctrlr_data": { 00:17:54.015 "cntlid": 0, 00:17:54.015 "vendor_id": "0x1b36", 00:17:54.015 "model_number": "QEMU NVMe Ctrl", 00:17:54.015 "serial_number": "12341", 00:17:54.015 "firmware_revision": "8.0.0", 00:17:54.015 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:54.015 "oacs": { 00:17:54.015 "security": 0, 00:17:54.015 "format": 1, 00:17:54.015 "firmware": 0, 00:17:54.015 "ns_manage": 1 00:17:54.015 }, 00:17:54.015 "multi_ctrlr": false, 00:17:54.015 "ana_reporting": false 00:17:54.015 }, 00:17:54.015 "vs": { 00:17:54.015 "nvme_version": "1.4" 00:17:54.015 }, 00:17:54.015 "ns_data": { 00:17:54.015 "id": 1, 00:17:54.015 "can_share": false 00:17:54.015 } 00:17:54.015 } 00:17:54.015 ], 00:17:54.015 "mp_policy": "active_passive" 00:17:54.015 } 00:17:54.015 } 00:17:54.015 ]' 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:54.015 09:23:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:54.015 09:23:45 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:54.015 09:23:45 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:54.015 09:23:45 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:54.015 09:23:45 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.015 09:23:45 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:54.282 09:23:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e67d8545-2338-4b79-9ecc-1d83b9b9a784 00:17:54.282 09:23:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:54.282 09:23:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e67d8545-2338-4b79-9ecc-1d83b9b9a784 00:17:54.579 09:23:46 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9df07b85-6787-438f-a02d-a56ab67e7b44 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9df07b85-6787-438f-a02d-a56ab67e7b44 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0256bf55-f8c3-47cf-b814-338233f36400 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0256bf55-f8c3-47cf-b814-338233f36400 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0256bf55-f8c3-47cf-b814-338233f36400 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:54.837 09:23:46 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
0256bf55-f8c3-47cf-b814-338233f36400 00:17:54.837 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0256bf55-f8c3-47cf-b814-338233f36400 00:17:54.837 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:54.837 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:54.837 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:54.837 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.095 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.095 { 00:17:55.095 "name": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:55.095 "aliases": [ 00:17:55.095 "lvs/nvme0n1p0" 00:17:55.095 ], 00:17:55.095 "product_name": "Logical Volume", 00:17:55.095 "block_size": 4096, 00:17:55.095 "num_blocks": 26476544, 00:17:55.095 "uuid": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:55.095 "assigned_rate_limits": { 00:17:55.095 "rw_ios_per_sec": 0, 00:17:55.095 "rw_mbytes_per_sec": 0, 00:17:55.095 "r_mbytes_per_sec": 0, 00:17:55.095 "w_mbytes_per_sec": 0 00:17:55.095 }, 00:17:55.095 "claimed": false, 00:17:55.095 "zoned": false, 00:17:55.095 "supported_io_types": { 00:17:55.095 "read": true, 00:17:55.095 "write": true, 00:17:55.095 "unmap": true, 00:17:55.095 "flush": false, 00:17:55.095 "reset": true, 00:17:55.095 "nvme_admin": false, 00:17:55.095 "nvme_io": false, 00:17:55.095 "nvme_io_md": false, 00:17:55.095 "write_zeroes": true, 00:17:55.095 "zcopy": false, 00:17:55.095 "get_zone_info": false, 00:17:55.095 "zone_management": false, 00:17:55.095 "zone_append": false, 00:17:55.095 "compare": false, 00:17:55.095 "compare_and_write": false, 00:17:55.095 "abort": false, 00:17:55.095 "seek_hole": true, 00:17:55.095 "seek_data": true, 00:17:55.095 "copy": false, 00:17:55.095 "nvme_iov_md": false 00:17:55.095 }, 00:17:55.095 "driver_specific": { 00:17:55.095 "lvol": { 00:17:55.095 "lvol_store_uuid": "9df07b85-6787-438f-a02d-a56ab67e7b44", 00:17:55.095 "base_bdev": "nvme0n1", 00:17:55.095 "thin_provision": true, 00:17:55.095 "num_allocated_clusters": 0, 00:17:55.095 "snapshot": false, 00:17:55.095 "clone": false, 00:17:55.095 "esnap_clone": false 00:17:55.095 } 00:17:55.095 } 00:17:55.095 } 00:17:55.095 ]' 00:17:55.095 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.095 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:55.095 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.354 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:55.354 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.354 09:23:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.354 09:23:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:55.354 09:23:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:55.354 09:23:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:55.613 09:23:47 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:55.613 09:23:47 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:55.613 09:23:47 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.613 09:23:47 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.613 { 00:17:55.613 "name": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:55.613 "aliases": [ 00:17:55.613 "lvs/nvme0n1p0" 00:17:55.613 ], 00:17:55.613 "product_name": "Logical Volume", 00:17:55.613 "block_size": 4096, 00:17:55.613 "num_blocks": 26476544, 00:17:55.613 "uuid": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:55.613 "assigned_rate_limits": { 00:17:55.613 "rw_ios_per_sec": 0, 00:17:55.613 "rw_mbytes_per_sec": 0, 00:17:55.613 "r_mbytes_per_sec": 0, 00:17:55.613 "w_mbytes_per_sec": 0 00:17:55.613 }, 00:17:55.613 "claimed": false, 00:17:55.613 "zoned": false, 00:17:55.613 "supported_io_types": { 00:17:55.613 "read": true, 00:17:55.613 "write": true, 00:17:55.613 "unmap": true, 00:17:55.613 "flush": false, 00:17:55.613 "reset": true, 00:17:55.613 "nvme_admin": false, 00:17:55.613 "nvme_io": false, 00:17:55.613 "nvme_io_md": false, 00:17:55.613 "write_zeroes": true, 00:17:55.613 "zcopy": false, 00:17:55.613 "get_zone_info": false, 00:17:55.613 "zone_management": false, 00:17:55.613 "zone_append": false, 00:17:55.613 "compare": false, 00:17:55.613 "compare_and_write": false, 00:17:55.613 "abort": false, 00:17:55.613 "seek_hole": true, 00:17:55.613 "seek_data": true, 00:17:55.613 "copy": false, 00:17:55.613 "nvme_iov_md": false 00:17:55.613 }, 00:17:55.613 "driver_specific": { 00:17:55.613 "lvol": { 00:17:55.613 "lvol_store_uuid": "9df07b85-6787-438f-a02d-a56ab67e7b44", 00:17:55.613 "base_bdev": "nvme0n1", 00:17:55.613 "thin_provision": true, 00:17:55.613 "num_allocated_clusters": 0, 00:17:55.613 "snapshot": false, 00:17:55.613 "clone": false, 00:17:55.613 "esnap_clone": false 00:17:55.613 } 00:17:55.613 } 00:17:55.613 } 00:17:55.613 ]' 00:17:55.613 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.871 09:23:47 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:55.871 09:23:47 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:55.871 09:23:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:55.871 09:23:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0256bf55-f8c3-47cf-b814-338233f36400 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.871 09:23:47 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:55.871 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0256bf55-f8c3-47cf-b814-338233f36400 00:17:56.129 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:56.129 { 00:17:56.129 "name": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:56.129 "aliases": [ 00:17:56.129 "lvs/nvme0n1p0" 00:17:56.129 ], 00:17:56.129 "product_name": "Logical Volume", 00:17:56.129 "block_size": 4096, 00:17:56.129 "num_blocks": 26476544, 00:17:56.130 "uuid": "0256bf55-f8c3-47cf-b814-338233f36400", 00:17:56.130 "assigned_rate_limits": { 00:17:56.130 "rw_ios_per_sec": 0, 00:17:56.130 "rw_mbytes_per_sec": 0, 00:17:56.130 "r_mbytes_per_sec": 0, 00:17:56.130 "w_mbytes_per_sec": 0 00:17:56.130 }, 00:17:56.130 "claimed": false, 00:17:56.130 "zoned": false, 00:17:56.130 "supported_io_types": { 00:17:56.130 "read": true, 00:17:56.130 "write": true, 00:17:56.130 "unmap": true, 00:17:56.130 "flush": false, 00:17:56.130 "reset": true, 00:17:56.130 "nvme_admin": false, 00:17:56.130 "nvme_io": false, 00:17:56.130 "nvme_io_md": false, 00:17:56.130 "write_zeroes": true, 00:17:56.130 "zcopy": false, 00:17:56.130 "get_zone_info": false, 00:17:56.130 "zone_management": false, 00:17:56.130 "zone_append": false, 00:17:56.130 "compare": false, 00:17:56.130 "compare_and_write": false, 00:17:56.130 "abort": false, 00:17:56.130 "seek_hole": true, 00:17:56.130 "seek_data": true, 00:17:56.130 "copy": false, 00:17:56.130 "nvme_iov_md": false 00:17:56.130 }, 00:17:56.130 "driver_specific": { 00:17:56.130 "lvol": { 00:17:56.130 "lvol_store_uuid": "9df07b85-6787-438f-a02d-a56ab67e7b44", 00:17:56.130 "base_bdev": "nvme0n1", 00:17:56.130 "thin_provision": true, 00:17:56.130 "num_allocated_clusters": 0, 00:17:56.130 "snapshot": false, 00:17:56.130 "clone": false, 00:17:56.130 "esnap_clone": false 00:17:56.130 } 00:17:56.130 } 00:17:56.130 } 00:17:56.130 ]' 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:56.130 09:23:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0256bf55-f8c3-47cf-b814-338233f36400 --l2p_dram_limit 10' 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:56.389 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:56.389 09:23:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0256bf55-f8c3-47cf-b814-338233f36400 --l2p_dram_limit 10 -c nvc0n1p0 00:17:56.389 
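Two details worth noting at this point in the trace. First, the "[: : integer expression expected" message is benign: restore.sh line 54 evaluates '[' '' -eq 1 ']' because no option populated the tested variable, '[' rejects the empty operand and returns non-zero, and the run falls through to line 58. Second, the bdev_ftl_create call above sits on top of the device stack assembled over the preceding trace; condensed, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and every argument as captured above:

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvstore 9df07b85-6787-438f-a02d-a56ab67e7b44
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9df07b85-6787-438f-a02d-a56ab67e7b44   # thin-provisioned 103424 MiB base
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
rpc.py bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, the 5171 MiB NV cache
rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0256bf55-f8c3-47cf-b814-338233f36400 --l2p_dram_limit 10 -c nvc0n1p0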
[2024-10-08 09:23:47.995301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:47.995350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.389 [2024-10-08 09:23:47.995365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.389 [2024-10-08 09:23:47.995379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:47.995447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:47.995457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.389 [2024-10-08 09:23:47.995466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:56.389 [2024-10-08 09:23:47.995472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:47.995494] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.389 [2024-10-08 09:23:47.996117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.389 [2024-10-08 09:23:47.996190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:47.996198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.389 [2024-10-08 09:23:47.996207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:17:56.389 [2024-10-08 09:23:47.996215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:47.996279] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ebde3831-2620-4969-b6d1-8149682e8f6d 00:17:56.389 [2024-10-08 09:23:47.997589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:47.997624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:56.389 [2024-10-08 09:23:47.997633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:56.389 [2024-10-08 09:23:47.997641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.004490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.004520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.389 [2024-10-08 09:23:48.004528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:17:56.389 [2024-10-08 09:23:48.004537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.004609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.004619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.389 [2024-10-08 09:23:48.004626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:56.389 [2024-10-08 09:23:48.004636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.004684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.004695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.389 [2024-10-08 09:23:48.004702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:56.389 [2024-10-08 09:23:48.004709] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.004727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.389 [2024-10-08 09:23:48.008023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.008048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.389 [2024-10-08 09:23:48.008057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:17:56.389 [2024-10-08 09:23:48.008063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.008094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.008101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.389 [2024-10-08 09:23:48.008109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:56.389 [2024-10-08 09:23:48.008117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.008132] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:56.389 [2024-10-08 09:23:48.008248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.389 [2024-10-08 09:23:48.008263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.389 [2024-10-08 09:23:48.008272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:56.389 [2024-10-08 09:23:48.008283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008290] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:56.389 [2024-10-08 09:23:48.008305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.389 [2024-10-08 09:23:48.008312] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.389 [2024-10-08 09:23:48.008318] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.389 [2024-10-08 09:23:48.008327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.008338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.389 [2024-10-08 09:23:48.008346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:56.389 [2024-10-08 09:23:48.008352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.008426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.389 [2024-10-08 09:23:48.008437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.389 [2024-10-08 09:23:48.008445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:56.389 [2024-10-08 09:23:48.008450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.389 [2024-10-08 09:23:48.008529] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.389 [2024-10-08 09:23:48.008537] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:56.389 [2024-10-08 09:23:48.008545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.389 [2024-10-08 09:23:48.008565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.389 [2024-10-08 09:23:48.008584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.389 [2024-10-08 09:23:48.008595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.389 [2024-10-08 09:23:48.008602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:56.389 [2024-10-08 09:23:48.008609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.389 [2024-10-08 09:23:48.008615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.389 [2024-10-08 09:23:48.008622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:56.389 [2024-10-08 09:23:48.008627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.389 [2024-10-08 09:23:48.008642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.389 [2024-10-08 09:23:48.008663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.389 [2024-10-08 09:23:48.008682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.389 [2024-10-08 09:23:48.008700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.389 [2024-10-08 09:23:48.008717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.389 [2024-10-08 09:23:48.008729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.389 [2024-10-08 09:23:48.008737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:56.389 [2024-10-08 09:23:48.008743] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.390 [2024-10-08 09:23:48.008750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.390 [2024-10-08 09:23:48.008755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:56.390 [2024-10-08 09:23:48.008762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.390 [2024-10-08 09:23:48.008767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.390 [2024-10-08 09:23:48.008773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:56.390 [2024-10-08 09:23:48.008779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.390 [2024-10-08 09:23:48.008785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.390 [2024-10-08 09:23:48.008790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:56.390 [2024-10-08 09:23:48.008796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.390 [2024-10-08 09:23:48.008801] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.390 [2024-10-08 09:23:48.008809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.390 [2024-10-08 09:23:48.008817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.390 [2024-10-08 09:23:48.008824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.390 [2024-10-08 09:23:48.008831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.390 [2024-10-08 09:23:48.008839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.390 [2024-10-08 09:23:48.008845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.390 [2024-10-08 09:23:48.008852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.390 [2024-10-08 09:23:48.008856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.390 [2024-10-08 09:23:48.008864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.390 [2024-10-08 09:23:48.008872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.390 [2024-10-08 09:23:48.008882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:56.390 [2024-10-08 09:23:48.008896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:56.390 [2024-10-08 09:23:48.008902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:56.390 [2024-10-08 09:23:48.008909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:56.390 [2024-10-08 09:23:48.008914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:56.390 [2024-10-08 09:23:48.008921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:56.390 [2024-10-08 09:23:48.008926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:56.390 [2024-10-08 09:23:48.008934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:56.390 [2024-10-08 09:23:48.008939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:56.390 [2024-10-08 09:23:48.008948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:56.390 [2024-10-08 09:23:48.008978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.390 [2024-10-08 09:23:48.008987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.008993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.390 [2024-10-08 09:23:48.009001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.390 [2024-10-08 09:23:48.009006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.390 [2024-10-08 09:23:48.009018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.390 [2024-10-08 09:23:48.009024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.390 [2024-10-08 09:23:48.009031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.390 [2024-10-08 09:23:48.009037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:56.390 [2024-10-08 09:23:48.009044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.390 [2024-10-08 09:23:48.009089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
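The two views of the layout printed above agree once the units are read correctly: blk_offs/blk_sz in the SB metadata dump are hexadecimal counts of 4096-byte FTL blocks, while the region dump prints MiB. A quick cross-check in shell arithmetic:

echo $(( 0x5000 * 4096 / 1048576 ))      # region type 0x2 (l2p): 0x5000 blocks = 80 MiB, as dumped
echo $(( 20971520 * 4 / 1048576 ))       # 20971520 L2P entries * 4-byte addresses = the same 80 MiB table
echo $(( 0x1900000 * 4096 / 1048576 ))   # region type 0x9 (data): 0x1900000 blocks = 102400 MiB, matching data_btm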
00:17:56.390 [2024-10-08 09:23:48.009101] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:58.918 [2024-10-08 09:23:50.428186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.428263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.918 [2024-10-08 09:23:50.428279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2419.085 ms 00:17:58.918 [2024-10-08 09:23:50.428290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.456494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.456550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.918 [2024-10-08 09:23:50.456565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.966 ms 00:17:58.918 [2024-10-08 09:23:50.456575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.456718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.456732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.918 [2024-10-08 09:23:50.456741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:58.918 [2024-10-08 09:23:50.456754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.498250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.498570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.918 [2024-10-08 09:23:50.498606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.454 ms 00:17:58.918 [2024-10-08 09:23:50.498622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.498683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.498700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.918 [2024-10-08 09:23:50.498714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.918 [2024-10-08 09:23:50.498737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.499262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.499290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.918 [2024-10-08 09:23:50.499303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:17:58.918 [2024-10-08 09:23:50.499321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.499510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.499527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.918 [2024-10-08 09:23:50.499540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:17:58.918 [2024-10-08 09:23:50.499557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.516571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.516621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.918 [2024-10-08 
09:23:50.516631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.989 ms 00:17:58.918 [2024-10-08 09:23:50.516641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.528845] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:58.918 [2024-10-08 09:23:50.532098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.532127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.918 [2024-10-08 09:23:50.532140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.373 ms 00:17:58.918 [2024-10-08 09:23:50.532151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.596105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.596155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:58.918 [2024-10-08 09:23:50.596174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.924 ms 00:17:58.918 [2024-10-08 09:23:50.596183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.918 [2024-10-08 09:23:50.596375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.918 [2024-10-08 09:23:50.596401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.918 [2024-10-08 09:23:50.596415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:58.918 [2024-10-08 09:23:50.596423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.619759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.619799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:59.177 [2024-10-08 09:23:50.619812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:17:59.177 [2024-10-08 09:23:50.619820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.642226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.642451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:59.177 [2024-10-08 09:23:50.642474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.365 ms 00:17:59.177 [2024-10-08 09:23:50.642482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.643267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.643302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.177 [2024-10-08 09:23:50.643315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:17:59.177 [2024-10-08 09:23:50.643324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.714310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.714533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:59.177 [2024-10-08 09:23:50.714560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.933 ms 00:17:59.177 [2024-10-08 09:23:50.714572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 
09:23:50.739565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.739608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:59.177 [2024-10-08 09:23:50.739622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.922 ms 00:17:59.177 [2024-10-08 09:23:50.739630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.762765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.762804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:59.177 [2024-10-08 09:23:50.762817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.105 ms 00:17:59.177 [2024-10-08 09:23:50.762824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.786522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.786559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.177 [2024-10-08 09:23:50.786572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.671 ms 00:17:59.177 [2024-10-08 09:23:50.786580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.786610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.786618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.177 [2024-10-08 09:23:50.786631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:59.177 [2024-10-08 09:23:50.786642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.786727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.177 [2024-10-08 09:23:50.786738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.177 [2024-10-08 09:23:50.786748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:59.177 [2024-10-08 09:23:50.786755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.177 [2024-10-08 09:23:50.787736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2791.948 ms, result 0 00:17:59.177 { 00:17:59.177 "name": "ftl0", 00:17:59.177 "uuid": "ebde3831-2620-4969-b6d1-8149682e8f6d" 00:17:59.177 } 00:17:59.177 09:23:50 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:59.177 09:23:50 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.436 09:23:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:59.436 09:23:51 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.695 [2024-10-08 09:23:51.203253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.203314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.695 [2024-10-08 09:23:51.203328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.695 [2024-10-08 09:23:51.203339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.203364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
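[Editor's note] The restore.sh trace above (steps @61-@65) is where the test snapshots the live bdev configuration before tearing ftl0 down: the two echo calls wrap the save_subsystem_config RPC output in a {"subsystems": [...]} envelope, and bdev_ftl_unload then drives the clean 'FTL shutdown' sequence traced below. A minimal shell sketch of the equivalent steps, assuming the envelope is captured into the ftl.json that spdk_dd consumes later in this log (the xtrace does not show the redirection target, so that part is inferred):

    # Sketch reconstructed from the restore.sh xtrace above; the redirect target
    # is an ASSUMPTION inferred from the later spdk_dd --json=.../ftl/config/ftl.json call.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    {
        echo '{"subsystems": ['
        "$SPDK_DIR/scripts/rpc.py" save_subsystem_config -n bdev   # dump the bdev subsystem config as JSON (traced at @62)
        echo ']}'
    } > "$SPDK_DIR/test/ftl/config/ftl.json"
    "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unload -b ftl0             # clean shutdown; persists FTL state (traced at @65)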
00:17:59.695 [2024-10-08 09:23:51.206152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.206367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.695 [2024-10-08 09:23:51.206411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms 00:17:59.695 [2024-10-08 09:23:51.206420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.206691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.206700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.695 [2024-10-08 09:23:51.206711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:17:59.695 [2024-10-08 09:23:51.206718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.209957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.209981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.695 [2024-10-08 09:23:51.209993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:17:59.695 [2024-10-08 09:23:51.210003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.216270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.216299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.695 [2024-10-08 09:23:51.216312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:17:59.695 [2024-10-08 09:23:51.216320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.240669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.240819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.695 [2024-10-08 09:23:51.240840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.279 ms 00:17:59.695 [2024-10-08 09:23:51.240847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.256486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.256520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.695 [2024-10-08 09:23:51.256534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.595 ms 00:17:59.695 [2024-10-08 09:23:51.256543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.256693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.695 [2024-10-08 09:23:51.256707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.695 [2024-10-08 09:23:51.256719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:17:59.695 [2024-10-08 09:23:51.256727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.695 [2024-10-08 09:23:51.279281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.696 [2024-10-08 09:23:51.279312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.696 [2024-10-08 09:23:51.279324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.533 ms 00:17:59.696 [2024-10-08 09:23:51.279332] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.696 [2024-10-08 09:23:51.301934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.696 [2024-10-08 09:23:51.302048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.696 [2024-10-08 09:23:51.302066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.397 ms 00:17:59.696 [2024-10-08 09:23:51.302074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.696 [2024-10-08 09:23:51.324415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.696 [2024-10-08 09:23:51.324446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.696 [2024-10-08 09:23:51.324458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.293 ms 00:17:59.696 [2024-10-08 09:23:51.324465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.696 [2024-10-08 09:23:51.346941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.696 [2024-10-08 09:23:51.346974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.696 [2024-10-08 09:23:51.346986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.400 ms 00:17:59.696 [2024-10-08 09:23:51.346993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.696 [2024-10-08 09:23:51.347029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.696 [2024-10-08 09:23:51.347045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 
09:23:51.347169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:17:59.696 [2024-10-08 09:23:51.347418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.696 [2024-10-08 09:23:51.347980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.347987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.347997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.697 [2024-10-08 09:23:51.348162] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.697 [2024-10-08 09:23:51.348173] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebde3831-2620-4969-b6d1-8149682e8f6d 00:17:59.697 [2024-10-08 09:23:51.348184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.697 [2024-10-08 09:23:51.348195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.697 [2024-10-08 09:23:51.348202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.697 [2024-10-08 09:23:51.348212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.697 [2024-10-08 09:23:51.348219] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.697 [2024-10-08 09:23:51.348228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.697 [2024-10-08 09:23:51.348238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.697 [2024-10-08 09:23:51.348246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.697 [2024-10-08 09:23:51.348252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.697 [2024-10-08 09:23:51.348262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.697 [2024-10-08 09:23:51.348269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.697 [2024-10-08 09:23:51.348279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:17:59.697 [2024-10-08 09:23:51.348286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.697 [2024-10-08 09:23:51.361039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.697 [2024-10-08 09:23:51.361070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
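[Editor's note] For readers skimming the shutdown dump above: ftl_dev_dump_bands lists all 100 bands at 0 / 261120 valid blocks, wr_cnt 0, state free, and ftl_dev_dump_stats reports WAF (write amplification factor, effectively total writes over user writes) as inf because user writes is 0 — the 960 total writes at this point are presumably all internal metadata traffic from startup and shutdown, since no user data has been written yet. A hedged one-liner for condensing such band dumps when eyeballing longer runs (console.log is a hypothetical saved copy of this output, not a file the test produces):

    # Hypothetical helper: tally band states from a saved console log.
    grep -o 'state: [a-z]*' console.log | sort | uniq -c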
00:17:59.697 [2024-10-08 09:23:51.361082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.717 ms 00:17:59.697 [2024-10-08 09:23:51.361091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.697 [2024-10-08 09:23:51.361490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.697 [2024-10-08 09:23:51.361507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.697 [2024-10-08 09:23:51.361518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:17:59.697 [2024-10-08 09:23:51.361526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.400631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.400671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.956 [2024-10-08 09:23:51.400685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.400694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.400763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.400771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.956 [2024-10-08 09:23:51.400782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.400789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.400864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.400875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.956 [2024-10-08 09:23:51.400885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.400893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.400917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.400925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.956 [2024-10-08 09:23:51.400934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.400942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.482351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.482424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.956 [2024-10-08 09:23:51.482439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.482447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.548818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.548875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.956 [2024-10-08 09:23:51.548889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.548898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549020] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.956 [2024-10-08 09:23:51.549031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.956 [2024-10-08 09:23:51.549115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.956 [2024-10-08 09:23:51.549251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.956 [2024-10-08 09:23:51.549317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.956 [2024-10-08 09:23:51.549406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.956 [2024-10-08 09:23:51.549477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.956 [2024-10-08 09:23:51.549488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.956 [2024-10-08 09:23:51.549495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.956 [2024-10-08 09:23:51.549631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.342 ms, result 0 00:17:59.956 true 00:17:59.956 09:23:51 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74581 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74581 ']' 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74581 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74581 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.956 killing process with pid 74581 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74581' 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74581 00:17:59.956 09:23:51 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74581 00:18:04.141 09:23:55 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:08.326 262144+0 records in 00:18:08.326 262144+0 records out 00:18:08.326 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.6919 s, 291 MB/s 00:18:08.326 09:23:59 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:10.224 09:24:01 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.224 [2024-10-08 09:24:01.726787] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:18:10.224 [2024-10-08 09:24:01.726901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74800 ] 00:18:10.224 [2024-10-08 09:24:01.870412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.482 [2024-10-08 09:24:02.047681] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.740 [2024-10-08 09:24:02.276727] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.740 [2024-10-08 09:24:02.276787] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.999 [2024-10-08 09:24:02.430205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.430263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.999 [2024-10-08 09:24:02.430277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.999 [2024-10-08 09:24:02.430286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.430338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.430349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.999 [2024-10-08 09:24:02.430358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:10.999 [2024-10-08 09:24:02.430366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.430399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.999 [2024-10-08 09:24:02.431102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.999 [2024-10-08 09:24:02.431125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.431133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.999 [2024-10-08 09:24:02.431143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:10.999 [2024-10-08 09:24:02.431151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.432533] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.999 [2024-10-08 09:24:02.445264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.445302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.999 [2024-10-08 09:24:02.445314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:18:10.999 [2024-10-08 09:24:02.445324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.445380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.445403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.999 [2024-10-08 09:24:02.445412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:10.999 [2024-10-08 09:24:02.445420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.451988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.452020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.999 [2024-10-08 09:24:02.452030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.506 ms 00:18:10.999 [2024-10-08 09:24:02.452038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.452109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.452120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.999 [2024-10-08 09:24:02.452128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:10.999 [2024-10-08 09:24:02.452135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.452179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.452190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.999 [2024-10-08 09:24:02.452198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:10.999 [2024-10-08 09:24:02.452206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.452229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.999 [2024-10-08 09:24:02.455723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.455752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.999 [2024-10-08 09:24:02.455761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:18:10.999 [2024-10-08 09:24:02.455769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.999 [2024-10-08 09:24:02.455800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.999 [2024-10-08 09:24:02.455809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.999 [2024-10-08 09:24:02.455818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:10.999 [2024-10-08 09:24:02.455825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.000 [2024-10-08 09:24:02.455855] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:11.000 [2024-10-08 09:24:02.455875] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:11.000 [2024-10-08 09:24:02.455911] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:11.000 [2024-10-08 09:24:02.455928] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:11.000 [2024-10-08 09:24:02.456033] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:11.000 [2024-10-08 09:24:02.456051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:11.000 [2024-10-08 09:24:02.456062] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:11.000 [2024-10-08 09:24:02.456077] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456086] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456095] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:11.000 [2024-10-08 09:24:02.456103] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:11.000 [2024-10-08 09:24:02.456111] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:11.000 [2024-10-08 09:24:02.456119] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:11.000 [2024-10-08 09:24:02.456126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.000 [2024-10-08 09:24:02.456133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:11.000 [2024-10-08 09:24:02.456141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:11.000 [2024-10-08 09:24:02.456148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.000 [2024-10-08 09:24:02.456231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.000 [2024-10-08 09:24:02.456247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:11.000 [2024-10-08 09:24:02.456255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:11.000 [2024-10-08 09:24:02.456262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.000 [2024-10-08 09:24:02.456376] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:11.000 [2024-10-08 09:24:02.456403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:11.000 [2024-10-08 09:24:02.456413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:11.000 [2024-10-08 09:24:02.456437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:11.000 [2024-10-08 09:24:02.456459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:11.000 [2024-10-08 
09:24:02.456466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.000 [2024-10-08 09:24:02.456474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:11.000 [2024-10-08 09:24:02.456481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:11.000 [2024-10-08 09:24:02.456489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:11.000 [2024-10-08 09:24:02.456501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:11.000 [2024-10-08 09:24:02.456508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:11.000 [2024-10-08 09:24:02.456515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:11.000 [2024-10-08 09:24:02.456531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:11.000 [2024-10-08 09:24:02.456553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:11.000 [2024-10-08 09:24:02.456573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:11.000 [2024-10-08 09:24:02.456593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:11.000 [2024-10-08 09:24:02.456613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:11.000 [2024-10-08 09:24:02.456633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.000 [2024-10-08 09:24:02.456647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:11.000 [2024-10-08 09:24:02.456654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:11.000 [2024-10-08 09:24:02.456660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:11.000 [2024-10-08 09:24:02.456667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:11.000 [2024-10-08 09:24:02.456674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:11.000 [2024-10-08 09:24:02.456680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:11.000 [2024-10-08 09:24:02.456694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:11.000 [2024-10-08 09:24:02.456701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456708] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:11.000 [2024-10-08 09:24:02.456715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:11.000 [2024-10-08 09:24:02.456725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:11.000 [2024-10-08 09:24:02.456740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:11.000 [2024-10-08 09:24:02.456748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:11.000 [2024-10-08 09:24:02.456756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:11.000 [2024-10-08 09:24:02.456764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:11.000 [2024-10-08 09:24:02.456771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:11.000 [2024-10-08 09:24:02.456778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:11.000 [2024-10-08 09:24:02.456786] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:11.000 [2024-10-08 09:24:02.456796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:11.000 [2024-10-08 09:24:02.456812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:11.000 [2024-10-08 09:24:02.456819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:11.000 [2024-10-08 09:24:02.456826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:11.000 [2024-10-08 09:24:02.456833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:11.000 [2024-10-08 09:24:02.456840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:11.000 [2024-10-08 09:24:02.456847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:11.000 [2024-10-08 09:24:02.456854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:11.000 [2024-10-08 09:24:02.456861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:11.000 [2024-10-08 09:24:02.456868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:11.000 [2024-10-08 09:24:02.456905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:11.000 [2024-10-08 09:24:02.456913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:11.000 [2024-10-08 09:24:02.456928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:11.000 [2024-10-08 09:24:02.456935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:11.000 [2024-10-08 09:24:02.456942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:11.000 [2024-10-08 09:24:02.456950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.000 [2024-10-08 09:24:02.456957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:11.000 [2024-10-08 09:24:02.456964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:18:11.000 [2024-10-08 09:24:02.456971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.000 [2024-10-08 09:24:02.498479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.000 [2024-10-08 09:24:02.498534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.000 [2024-10-08 09:24:02.498552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.462 ms 00:18:11.001 [2024-10-08 09:24:02.498564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.498701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.498716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.001 [2024-10-08 09:24:02.498728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:11.001 [2024-10-08 09:24:02.498739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.531240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.531275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.001 [2024-10-08 09:24:02.531289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.421 ms 00:18:11.001 [2024-10-08 09:24:02.531298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.531336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 
09:24:02.531345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.001 [2024-10-08 09:24:02.531354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:11.001 [2024-10-08 09:24:02.531361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.531823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.531848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.001 [2024-10-08 09:24:02.531858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:11.001 [2024-10-08 09:24:02.531872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.532008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.532020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.001 [2024-10-08 09:24:02.532028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:11.001 [2024-10-08 09:24:02.532036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.545401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.545433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.001 [2024-10-08 09:24:02.545443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.345 ms 00:18:11.001 [2024-10-08 09:24:02.545452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.558287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:11.001 [2024-10-08 09:24:02.558338] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:11.001 [2024-10-08 09:24:02.558351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.558360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:11.001 [2024-10-08 09:24:02.558369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:18:11.001 [2024-10-08 09:24:02.558377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.582819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.582854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.001 [2024-10-08 09:24:02.582865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.393 ms 00:18:11.001 [2024-10-08 09:24:02.582874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.594150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.594182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.001 [2024-10-08 09:24:02.594192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.232 ms 00:18:11.001 [2024-10-08 09:24:02.594199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.605051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.605082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:18:11.001 [2024-10-08 09:24:02.605093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.819 ms 00:18:11.001 [2024-10-08 09:24:02.605101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.605739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.605764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.001 [2024-10-08 09:24:02.605774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:18:11.001 [2024-10-08 09:24:02.605782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.665386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.665477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.001 [2024-10-08 09:24:02.665492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.585 ms 00:18:11.001 [2024-10-08 09:24:02.665501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.676546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.001 [2024-10-08 09:24:02.679571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.679601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.001 [2024-10-08 09:24:02.679614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.009 ms 00:18:11.001 [2024-10-08 09:24:02.679623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.679738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.679749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.001 [2024-10-08 09:24:02.679758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:11.001 [2024-10-08 09:24:02.679767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.679841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.679851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.001 [2024-10-08 09:24:02.679860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:11.001 [2024-10-08 09:24:02.679868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.679889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.679901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.001 [2024-10-08 09:24:02.679910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:11.001 [2024-10-08 09:24:02.679918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.001 [2024-10-08 09:24:02.679952] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.001 [2024-10-08 09:24:02.679962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.001 [2024-10-08 09:24:02.679970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.001 [2024-10-08 09:24:02.679978] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:11.001 [2024-10-08 09:24:02.679986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.289 [2024-10-08 09:24:02.703495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.289 [2024-10-08 09:24:02.703532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.289 [2024-10-08 09:24:02.703544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.488 ms 00:18:11.289 [2024-10-08 09:24:02.703553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.289 [2024-10-08 09:24:02.703628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.289 [2024-10-08 09:24:02.703640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.289 [2024-10-08 09:24:02.703650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:11.289 [2024-10-08 09:24:02.703657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.289 [2024-10-08 09:24:02.705071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.367 ms, result 0 00:18:12.237  [2024-10-08T09:24:04.853Z] Copying: 50/1024 [MB] (50 MBps) [2024-10-08T09:24:05.788Z] Copying: 96/1024 [MB] (45 MBps) [2024-10-08T09:24:06.722Z] Copying: 142/1024 [MB] (46 MBps) [2024-10-08T09:24:08.096Z] Copying: 189/1024 [MB] (47 MBps) [2024-10-08T09:24:09.029Z] Copying: 240/1024 [MB] (50 MBps) [2024-10-08T09:24:09.964Z] Copying: 289/1024 [MB] (49 MBps) [2024-10-08T09:24:10.898Z] Copying: 335/1024 [MB] (45 MBps) [2024-10-08T09:24:11.833Z] Copying: 379/1024 [MB] (43 MBps) [2024-10-08T09:24:12.768Z] Copying: 428/1024 [MB] (49 MBps) [2024-10-08T09:24:14.173Z] Copying: 472/1024 [MB] (44 MBps) [2024-10-08T09:24:14.740Z] Copying: 517/1024 [MB] (44 MBps) [2024-10-08T09:24:16.114Z] Copying: 563/1024 [MB] (45 MBps) [2024-10-08T09:24:17.048Z] Copying: 612/1024 [MB] (49 MBps) [2024-10-08T09:24:17.982Z] Copying: 658/1024 [MB] (45 MBps) [2024-10-08T09:24:18.921Z] Copying: 708/1024 [MB] (50 MBps) [2024-10-08T09:24:19.853Z] Copying: 757/1024 [MB] (49 MBps) [2024-10-08T09:24:20.786Z] Copying: 804/1024 [MB] (46 MBps) [2024-10-08T09:24:21.719Z] Copying: 849/1024 [MB] (44 MBps) [2024-10-08T09:24:23.092Z] Copying: 893/1024 [MB] (44 MBps) [2024-10-08T09:24:24.026Z] Copying: 939/1024 [MB] (46 MBps) [2024-10-08T09:24:24.593Z] Copying: 985/1024 [MB] (45 MBps) [2024-10-08T09:24:24.593Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-08 09:24:24.542358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.542425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.910 [2024-10-08 09:24:24.542438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.910 [2024-10-08 09:24:24.542445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.542465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.910 [2024-10-08 09:24:24.544694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.544726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.910 [2024-10-08 09:24:24.544735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:18:32.910 [2024-10-08 09:24:24.544743] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.546052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.546082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.910 [2024-10-08 09:24:24.546089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:18:32.910 [2024-10-08 09:24:24.546096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.556726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.556764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.910 [2024-10-08 09:24:24.556773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.617 ms 00:18:32.910 [2024-10-08 09:24:24.556779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.561385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.561414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.910 [2024-10-08 09:24:24.561423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:18:32.910 [2024-10-08 09:24:24.561430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.579963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.579996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.910 [2024-10-08 09:24:24.580006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.489 ms 00:18:32.910 [2024-10-08 09:24:24.580013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.591396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.591427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.910 [2024-10-08 09:24:24.591442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.353 ms 00:18:32.910 [2024-10-08 09:24:24.591449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.910 [2024-10-08 09:24:24.591541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.910 [2024-10-08 09:24:24.591550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.910 [2024-10-08 09:24:24.591557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:32.910 [2024-10-08 09:24:24.591563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.169 [2024-10-08 09:24:24.609367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.169 [2024-10-08 09:24:24.609403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:33.169 [2024-10-08 09:24:24.609412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.793 ms 00:18:33.169 [2024-10-08 09:24:24.609418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.169 [2024-10-08 09:24:24.626766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.169 [2024-10-08 09:24:24.626797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:33.169 [2024-10-08 09:24:24.626806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.319 ms 00:18:33.169 [2024-10-08 09:24:24.626813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.169 [2024-10-08 09:24:24.643408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.169 [2024-10-08 09:24:24.643449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.169 [2024-10-08 09:24:24.643458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.566 ms 00:18:33.169 [2024-10-08 09:24:24.643464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.169 [2024-10-08 09:24:24.660107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.169 [2024-10-08 09:24:24.660137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.169 [2024-10-08 09:24:24.660146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.593 ms 00:18:33.169 [2024-10-08 09:24:24.660152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.169 [2024-10-08 09:24:24.660180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.169 [2024-10-08 09:24:24.660196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.169 [2024-10-08 09:24:24.660256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:18:33.170 [2024-10-08 09:24:24.660303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660766] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.170 [2024-10-08 09:24:24.660827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.170 [2024-10-08 09:24:24.660834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebde3831-2620-4969-b6d1-8149682e8f6d 00:18:33.170 [2024-10-08 09:24:24.660841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.170 [2024-10-08 09:24:24.660847] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.170 [2024-10-08 09:24:24.660852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.171 [2024-10-08 09:24:24.660858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.171 [2024-10-08 09:24:24.660864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.171 [2024-10-08 09:24:24.660870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.171 [2024-10-08 09:24:24.660876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.171 [2024-10-08 09:24:24.660881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.171 [2024-10-08 09:24:24.660886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.171 [2024-10-08 09:24:24.660891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.171 [2024-10-08 09:24:24.660901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.171 [2024-10-08 09:24:24.660914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:18:33.171 [2024-10-08 09:24:24.660920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.670773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.171 [2024-10-08 09:24:24.670801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.171 [2024-10-08 09:24:24.670809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:18:33.171 [2024-10-08 09:24:24.670816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.671108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.171 [2024-10-08 09:24:24.671122] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.171 [2024-10-08 09:24:24.671129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:33.171 [2024-10-08 09:24:24.671135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.694338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.694378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.171 [2024-10-08 09:24:24.694395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.694402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.694466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.694474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.171 [2024-10-08 09:24:24.694481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.694487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.694558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.694567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.171 [2024-10-08 09:24:24.694573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.694580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.694593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.694604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.171 [2024-10-08 09:24:24.694611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.694616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.756771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.756827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.171 [2024-10-08 09:24:24.756840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.756847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.171 [2024-10-08 09:24:24.807555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.171 [2024-10-08 09:24:24.807635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807689] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.171 [2024-10-08 09:24:24.807708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.171 [2024-10-08 09:24:24.807807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.171 [2024-10-08 09:24:24.807856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.171 [2024-10-08 09:24:24.807912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.807961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.171 [2024-10-08 09:24:24.807969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.171 [2024-10-08 09:24:24.807977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.171 [2024-10-08 09:24:24.807983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.171 [2024-10-08 09:24:24.808091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.704 ms, result 0 00:18:35.072 00:18:35.072 00:18:35.072 09:24:26 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:35.072 [2024-10-08 09:24:26.706609] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:18:35.072 [2024-10-08 09:24:26.706938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75058 ] 00:18:35.329 [2024-10-08 09:24:26.844350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.587 [2024-10-08 09:24:27.029975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.587 [2024-10-08 09:24:27.259682] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.587 [2024-10-08 09:24:27.259746] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.845 [2024-10-08 09:24:27.412761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.412820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.845 [2024-10-08 09:24:27.412832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:35.845 [2024-10-08 09:24:27.412839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.412882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.412890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.845 [2024-10-08 09:24:27.412897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:35.845 [2024-10-08 09:24:27.412903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.412919] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.845 [2024-10-08 09:24:27.413439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.845 [2024-10-08 09:24:27.413460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.413468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.845 [2024-10-08 09:24:27.413475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:18:35.845 [2024-10-08 09:24:27.413481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.414750] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.845 [2024-10-08 09:24:27.424912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.424945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.845 [2024-10-08 09:24:27.424956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.163 ms 00:18:35.845 [2024-10-08 09:24:27.424963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.425014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.425021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.845 [2024-10-08 09:24:27.425028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:35.845 [2024-10-08 09:24:27.425034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.431170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:35.845 [2024-10-08 09:24:27.431199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.845 [2024-10-08 09:24:27.431207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.089 ms 00:18:35.845 [2024-10-08 09:24:27.431214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.431274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.431283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.845 [2024-10-08 09:24:27.431291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:35.845 [2024-10-08 09:24:27.431297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.431338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.431346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.845 [2024-10-08 09:24:27.431353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.845 [2024-10-08 09:24:27.431359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.431377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.845 [2024-10-08 09:24:27.434597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.434623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.845 [2024-10-08 09:24:27.434631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:18:35.845 [2024-10-08 09:24:27.434638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.434663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.845 [2024-10-08 09:24:27.434671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.845 [2024-10-08 09:24:27.434678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:35.845 [2024-10-08 09:24:27.434684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.845 [2024-10-08 09:24:27.434704] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.845 [2024-10-08 09:24:27.434721] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:35.845 [2024-10-08 09:24:27.434752] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.845 [2024-10-08 09:24:27.434764] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:35.845 [2024-10-08 09:24:27.434847] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.845 [2024-10-08 09:24:27.434858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.845 [2024-10-08 09:24:27.434868] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.845 [2024-10-08 09:24:27.434880] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.845 [2024-10-08 09:24:27.434888] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.846 [2024-10-08 09:24:27.434894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:35.846 [2024-10-08 09:24:27.434901] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.846 [2024-10-08 09:24:27.434908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.846 [2024-10-08 09:24:27.434914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.846 [2024-10-08 09:24:27.434920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.434927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.846 [2024-10-08 09:24:27.434934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:18:35.846 [2024-10-08 09:24:27.434939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.435003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.435012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.846 [2024-10-08 09:24:27.435019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:35.846 [2024-10-08 09:24:27.435024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.435110] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.846 [2024-10-08 09:24:27.435124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.846 [2024-10-08 09:24:27.435131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.846 [2024-10-08 09:24:27.435151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.846 [2024-10-08 09:24:27.435170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.846 [2024-10-08 09:24:27.435182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.846 [2024-10-08 09:24:27.435187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:35.846 [2024-10-08 09:24:27.435193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.846 [2024-10-08 09:24:27.435204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.846 [2024-10-08 09:24:27.435210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:35.846 [2024-10-08 09:24:27.435216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.846 [2024-10-08 09:24:27.435227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435232] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.846 [2024-10-08 09:24:27.435242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.846 [2024-10-08 09:24:27.435257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.846 [2024-10-08 09:24:27.435272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.846 [2024-10-08 09:24:27.435288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.846 [2024-10-08 09:24:27.435304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.846 [2024-10-08 09:24:27.435313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.846 [2024-10-08 09:24:27.435318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:35.846 [2024-10-08 09:24:27.435323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.846 [2024-10-08 09:24:27.435328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.846 [2024-10-08 09:24:27.435334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:35.846 [2024-10-08 09:24:27.435339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.846 [2024-10-08 09:24:27.435349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:35.846 [2024-10-08 09:24:27.435354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435359] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.846 [2024-10-08 09:24:27.435365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.846 [2024-10-08 09:24:27.435372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.846 [2024-10-08 09:24:27.435416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.846 [2024-10-08 09:24:27.435423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.846 [2024-10-08 09:24:27.435429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.846 
[2024-10-08 09:24:27.435435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.846 [2024-10-08 09:24:27.435440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.846 [2024-10-08 09:24:27.435446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.846 [2024-10-08 09:24:27.435453] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.846 [2024-10-08 09:24:27.435461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:35.846 [2024-10-08 09:24:27.435474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:35.846 [2024-10-08 09:24:27.435480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:35.846 [2024-10-08 09:24:27.435485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:35.846 [2024-10-08 09:24:27.435491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:35.846 [2024-10-08 09:24:27.435497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:35.846 [2024-10-08 09:24:27.435503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:35.846 [2024-10-08 09:24:27.435509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:35.846 [2024-10-08 09:24:27.435515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:35.846 [2024-10-08 09:24:27.435521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:35.846 [2024-10-08 09:24:27.435549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.846 [2024-10-08 09:24:27.435556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.846 [2024-10-08 09:24:27.435569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.846 [2024-10-08 09:24:27.435575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.846 [2024-10-08 09:24:27.435581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.846 [2024-10-08 09:24:27.435586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.435592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.846 [2024-10-08 09:24:27.435598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:35.846 [2024-10-08 09:24:27.435604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.472067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.472115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.846 [2024-10-08 09:24:27.472130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.411 ms 00:18:35.846 [2024-10-08 09:24:27.472139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.472314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.472325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.846 [2024-10-08 09:24:27.472334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:35.846 [2024-10-08 09:24:27.472342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.498837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.846 [2024-10-08 09:24:27.498872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.846 [2024-10-08 09:24:27.498885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.412 ms 00:18:35.846 [2024-10-08 09:24:27.498892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.846 [2024-10-08 09:24:27.498929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.847 [2024-10-08 09:24:27.498936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.847 [2024-10-08 09:24:27.498943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:35.847 [2024-10-08 09:24:27.498949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.847 [2024-10-08 09:24:27.499363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.847 [2024-10-08 09:24:27.499400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.847 [2024-10-08 09:24:27.499408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:18:35.847 [2024-10-08 09:24:27.499418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.847 [2024-10-08 09:24:27.499531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.847 [2024-10-08 09:24:27.499547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.847 [2024-10-08 09:24:27.499554] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:35.847 [2024-10-08 09:24:27.499561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.847 [2024-10-08 09:24:27.510567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.847 [2024-10-08 09:24:27.510593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.847 [2024-10-08 09:24:27.510602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.988 ms 00:18:35.847 [2024-10-08 09:24:27.510609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.847 [2024-10-08 09:24:27.520815] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:35.847 [2024-10-08 09:24:27.520843] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.847 [2024-10-08 09:24:27.520853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.847 [2024-10-08 09:24:27.520860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.847 [2024-10-08 09:24:27.520868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.140 ms 00:18:35.847 [2024-10-08 09:24:27.520874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.539612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.539642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:36.105 [2024-10-08 09:24:27.539652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.698 ms 00:18:36.105 [2024-10-08 09:24:27.539659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.548730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.548757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:36.105 [2024-10-08 09:24:27.548765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.033 ms 00:18:36.105 [2024-10-08 09:24:27.548771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.557522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.557548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:36.105 [2024-10-08 09:24:27.557556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.711 ms 00:18:36.105 [2024-10-08 09:24:27.557561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.558049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.558070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.105 [2024-10-08 09:24:27.558077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:18:36.105 [2024-10-08 09:24:27.558083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.605835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.605894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:36.105 [2024-10-08 09:24:27.605906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.734 ms 00:18:36.105 [2024-10-08 09:24:27.605914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.614667] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:36.105 [2024-10-08 09:24:27.617396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.617424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.105 [2024-10-08 09:24:27.617436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.415 ms 00:18:36.105 [2024-10-08 09:24:27.617447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.617542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.617551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:36.105 [2024-10-08 09:24:27.617559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:36.105 [2024-10-08 09:24:27.617567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.617648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.617666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.105 [2024-10-08 09:24:27.617674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:36.105 [2024-10-08 09:24:27.617680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.617700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.617708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.105 [2024-10-08 09:24:27.617715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.105 [2024-10-08 09:24:27.617721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.617750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:36.105 [2024-10-08 09:24:27.617760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.617767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:36.105 [2024-10-08 09:24:27.617775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:36.105 [2024-10-08 09:24:27.617784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.636442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.636486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.105 [2024-10-08 09:24:27.636496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.642 ms 00:18:36.105 [2024-10-08 09:24:27.636504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.105 [2024-10-08 09:24:27.636579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.105 [2024-10-08 09:24:27.636587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.105 [2024-10-08 09:24:27.636595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:36.105 [2024-10-08 09:24:27.636601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:36.105 [2024-10-08 09:24:27.637968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.795 ms, result 0 00:18:37.479  [2024-10-08T09:24:30.095Z] Copying: 49/1024 [MB] (49 MBps) [2024-10-08T09:24:31.028Z] Copying: 97/1024 [MB] (48 MBps) [2024-10-08T09:24:31.961Z] Copying: 145/1024 [MB] (47 MBps) [2024-10-08T09:24:32.893Z] Copying: 195/1024 [MB] (49 MBps) [2024-10-08T09:24:33.826Z] Copying: 244/1024 [MB] (49 MBps) [2024-10-08T09:24:35.199Z] Copying: 293/1024 [MB] (48 MBps) [2024-10-08T09:24:36.132Z] Copying: 341/1024 [MB] (48 MBps) [2024-10-08T09:24:37.065Z] Copying: 394/1024 [MB] (52 MBps) [2024-10-08T09:24:37.999Z] Copying: 444/1024 [MB] (50 MBps) [2024-10-08T09:24:38.970Z] Copying: 491/1024 [MB] (47 MBps) [2024-10-08T09:24:39.905Z] Copying: 545/1024 [MB] (53 MBps) [2024-10-08T09:24:40.839Z] Copying: 596/1024 [MB] (51 MBps) [2024-10-08T09:24:42.212Z] Copying: 645/1024 [MB] (49 MBps) [2024-10-08T09:24:42.777Z] Copying: 694/1024 [MB] (48 MBps) [2024-10-08T09:24:44.152Z] Copying: 743/1024 [MB] (49 MBps) [2024-10-08T09:24:45.085Z] Copying: 793/1024 [MB] (49 MBps) [2024-10-08T09:24:46.018Z] Copying: 844/1024 [MB] (50 MBps) [2024-10-08T09:24:46.951Z] Copying: 893/1024 [MB] (49 MBps) [2024-10-08T09:24:47.884Z] Copying: 945/1024 [MB] (51 MBps) [2024-10-08T09:24:48.450Z] Copying: 995/1024 [MB] (50 MBps) [2024-10-08T09:24:48.709Z] Copying: 1024/1024 [MB] (average 49 MBps)[2024-10-08 09:24:48.593557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.593652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.026 [2024-10-08 09:24:48.593677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.026 [2024-10-08 09:24:48.593692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.593735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.026 [2024-10-08 09:24:48.597281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.597315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.026 [2024-10-08 09:24:48.597326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:18:57.026 [2024-10-08 09:24:48.597335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.597576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.597594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.026 [2024-10-08 09:24:48.597603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:57.026 [2024-10-08 09:24:48.597611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.601062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.601085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.026 [2024-10-08 09:24:48.601095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.434 ms 00:18:57.026 [2024-10-08 09:24:48.601103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.607341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.607371] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.026 [2024-10-08 09:24:48.607382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:18:57.026 [2024-10-08 09:24:48.607413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.633529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.633570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.026 [2024-10-08 09:24:48.633582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.050 ms 00:18:57.026 [2024-10-08 09:24:48.633591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.647816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.647857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.026 [2024-10-08 09:24:48.647869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.200 ms 00:18:57.026 [2024-10-08 09:24:48.647878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.648009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.648020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.026 [2024-10-08 09:24:48.648029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:57.026 [2024-10-08 09:24:48.648037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.671568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.671601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.026 [2024-10-08 09:24:48.671612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:18:57.026 [2024-10-08 09:24:48.671619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.026 [2024-10-08 09:24:48.694447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.026 [2024-10-08 09:24:48.694480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:57.026 [2024-10-08 09:24:48.694490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.810 ms 00:18:57.026 [2024-10-08 09:24:48.694498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.285 [2024-10-08 09:24:48.716631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.285 [2024-10-08 09:24:48.716663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.285 [2024-10-08 09:24:48.716674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.116 ms 00:18:57.285 [2024-10-08 09:24:48.716681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.285 [2024-10-08 09:24:48.738903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.285 [2024-10-08 09:24:48.738932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.285 [2024-10-08 09:24:48.738942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.180 ms 00:18:57.285 [2024-10-08 09:24:48.738949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.285 [2024-10-08 09:24:48.738967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:18:57.285 [2024-10-08 09:24:48.738982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.738992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.285 [2024-10-08 09:24:48.739142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739365] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739583] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 
09:24:48.739768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.286 [2024-10-08 09:24:48.739784] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.286 [2024-10-08 09:24:48.739792] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebde3831-2620-4969-b6d1-8149682e8f6d 00:18:57.286 [2024-10-08 09:24:48.739801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.286 [2024-10-08 09:24:48.739809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.286 [2024-10-08 09:24:48.739816] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.286 [2024-10-08 09:24:48.739825] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.286 [2024-10-08 09:24:48.739832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.286 [2024-10-08 09:24:48.739840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.286 [2024-10-08 09:24:48.739852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.286 [2024-10-08 09:24:48.739859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.286 [2024-10-08 09:24:48.739866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.286 [2024-10-08 09:24:48.739873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.286 [2024-10-08 09:24:48.739888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.286 [2024-10-08 09:24:48.739897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:18:57.286 [2024-10-08 09:24:48.739904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.286 [2024-10-08 09:24:48.752548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.286 [2024-10-08 09:24:48.752743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.286 [2024-10-08 09:24:48.752759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms 00:18:57.287 [2024-10-08 09:24:48.752774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.753121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.287 [2024-10-08 09:24:48.753131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.287 [2024-10-08 09:24:48.753140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:57.287 [2024-10-08 09:24:48.753147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.782889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.782923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.287 [2024-10-08 09:24:48.782933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.782944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.783000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.783009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.287 [2024-10-08 09:24:48.783017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 
[2024-10-08 09:24:48.783024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.783081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.783091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.287 [2024-10-08 09:24:48.783100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.783107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.783126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.783134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.287 [2024-10-08 09:24:48.783143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.783150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.864647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.864699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.287 [2024-10-08 09:24:48.864710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.864722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.287 [2024-10-08 09:24:48.930288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.287 [2024-10-08 09:24:48.930413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.287 [2024-10-08 09:24:48.930483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.287 [2024-10-08 09:24:48.930602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.287 [2024-10-08 09:24:48.930659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.287 [2024-10-08 09:24:48.930724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.287 [2024-10-08 09:24:48.930788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.287 [2024-10-08 09:24:48.930797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.287 [2024-10-08 09:24:48.930804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.287 [2024-10-08 09:24:48.930925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.354 ms, result 0 00:18:58.222 00:18:58.222 00:18:58.222 09:24:49 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:00.791 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:00.791 09:24:51 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:00.791 [2024-10-08 09:24:52.002556] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:19:00.791 [2024-10-08 09:24:52.002675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75328 ] 00:19:00.791 [2024-10-08 09:24:52.152618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.791 [2024-10-08 09:24:52.359010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.048 [2024-10-08 09:24:52.631107] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:01.048 [2024-10-08 09:24:52.631185] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:01.307 [2024-10-08 09:24:52.786010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.786237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:01.307 [2024-10-08 09:24:52.786258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:01.307 [2024-10-08 09:24:52.786268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.786329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.786340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.307 [2024-10-08 09:24:52.786350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:01.307 [2024-10-08 09:24:52.786357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.786377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:01.307 [2024-10-08 09:24:52.787098] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:01.307 [2024-10-08 09:24:52.787120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.787129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.307 [2024-10-08 09:24:52.787137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:19:01.307 [2024-10-08 09:24:52.787145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.788540] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:01.307 [2024-10-08 09:24:52.801470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.801507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:01.307 [2024-10-08 09:24:52.801521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.931 ms 00:19:01.307 [2024-10-08 09:24:52.801529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.801585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.801596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:01.307 [2024-10-08 09:24:52.801605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:01.307 [2024-10-08 09:24:52.801613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.808205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:01.307 [2024-10-08 09:24:52.808238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.307 [2024-10-08 09:24:52.808249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:19:01.307 [2024-10-08 09:24:52.808258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.808333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.808344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.307 [2024-10-08 09:24:52.808353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:01.307 [2024-10-08 09:24:52.808361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.808427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.808548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:01.307 [2024-10-08 09:24:52.808560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:01.307 [2024-10-08 09:24:52.808568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.808590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:01.307 [2024-10-08 09:24:52.812265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.812430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.307 [2024-10-08 09:24:52.812449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:19:01.307 [2024-10-08 09:24:52.812456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.812491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.812502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:01.307 [2024-10-08 09:24:52.812510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:01.307 [2024-10-08 09:24:52.812518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.812551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:01.307 [2024-10-08 09:24:52.812573] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:01.307 [2024-10-08 09:24:52.812610] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:01.307 [2024-10-08 09:24:52.812626] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:01.307 [2024-10-08 09:24:52.812735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:01.307 [2024-10-08 09:24:52.812747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:01.307 [2024-10-08 09:24:52.812758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:01.307 [2024-10-08 09:24:52.812773] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:01.307 [2024-10-08 09:24:52.812783] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:01.307 [2024-10-08 09:24:52.812791] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:01.307 [2024-10-08 09:24:52.812799] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:01.307 [2024-10-08 09:24:52.812807] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:01.307 [2024-10-08 09:24:52.812815] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:01.307 [2024-10-08 09:24:52.812824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.307 [2024-10-08 09:24:52.812832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:01.307 [2024-10-08 09:24:52.812840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:01.307 [2024-10-08 09:24:52.812847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.307 [2024-10-08 09:24:52.812930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.812941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:01.308 [2024-10-08 09:24:52.812950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:01.308 [2024-10-08 09:24:52.812957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.813060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:01.308 [2024-10-08 09:24:52.813071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:01.308 [2024-10-08 09:24:52.813080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:01.308 [2024-10-08 09:24:52.813102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:01.308 [2024-10-08 09:24:52.813125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.308 [2024-10-08 09:24:52.813139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:01.308 [2024-10-08 09:24:52.813146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:01.308 [2024-10-08 09:24:52.813153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.308 [2024-10-08 09:24:52.813166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:01.308 [2024-10-08 09:24:52.813174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:01.308 [2024-10-08 09:24:52.813181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:01.308 [2024-10-08 09:24:52.813195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813204] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:01.308 [2024-10-08 09:24:52.813218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:01.308 [2024-10-08 09:24:52.813239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:01.308 [2024-10-08 09:24:52.813259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:01.308 [2024-10-08 09:24:52.813279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:01.308 [2024-10-08 09:24:52.813298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.308 [2024-10-08 09:24:52.813312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:01.308 [2024-10-08 09:24:52.813318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:01.308 [2024-10-08 09:24:52.813324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.308 [2024-10-08 09:24:52.813331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:01.308 [2024-10-08 09:24:52.813338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:01.308 [2024-10-08 09:24:52.813344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:01.308 [2024-10-08 09:24:52.813360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:01.308 [2024-10-08 09:24:52.813367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:01.308 [2024-10-08 09:24:52.813381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:01.308 [2024-10-08 09:24:52.813404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.308 [2024-10-08 09:24:52.813420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:01.308 [2024-10-08 09:24:52.813429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:01.308 [2024-10-08 09:24:52.813436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:01.308 
[2024-10-08 09:24:52.813444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:01.308 [2024-10-08 09:24:52.813451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:01.308 [2024-10-08 09:24:52.813458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:01.308 [2024-10-08 09:24:52.813466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:01.308 [2024-10-08 09:24:52.813475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:01.308 [2024-10-08 09:24:52.813493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:01.308 [2024-10-08 09:24:52.813500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:01.308 [2024-10-08 09:24:52.813508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:01.308 [2024-10-08 09:24:52.813515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:01.308 [2024-10-08 09:24:52.813523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:01.308 [2024-10-08 09:24:52.813530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:01.308 [2024-10-08 09:24:52.813538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:01.308 [2024-10-08 09:24:52.813546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:01.308 [2024-10-08 09:24:52.813553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:01.308 [2024-10-08 09:24:52.813591] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:01.308 [2024-10-08 09:24:52.813599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:01.308 [2024-10-08 09:24:52.813616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:01.308 [2024-10-08 09:24:52.813624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:01.308 [2024-10-08 09:24:52.813632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:01.308 [2024-10-08 09:24:52.813639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.813647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:01.308 [2024-10-08 09:24:52.813655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:19:01.308 [2024-10-08 09:24:52.813662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.863169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.863220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.308 [2024-10-08 09:24:52.863237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.460 ms 00:19:01.308 [2024-10-08 09:24:52.863248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.863366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.863378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:01.308 [2024-10-08 09:24:52.863428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:01.308 [2024-10-08 09:24:52.863439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.896109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.896144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.308 [2024-10-08 09:24:52.896158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.594 ms 00:19:01.308 [2024-10-08 09:24:52.896167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.896199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.896208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.308 [2024-10-08 09:24:52.896217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:01.308 [2024-10-08 09:24:52.896224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.896713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.896730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.308 [2024-10-08 09:24:52.896741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:01.308 [2024-10-08 09:24:52.896755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.308 [2024-10-08 09:24:52.896886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.308 [2024-10-08 09:24:52.896896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.308 [2024-10-08 09:24:52.896905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:01.308 [2024-10-08 09:24:52.896913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.910363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.910407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.309 [2024-10-08 09:24:52.910417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.430 ms 00:19:01.309 [2024-10-08 09:24:52.910426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.923307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:01.309 [2024-10-08 09:24:52.923339] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:01.309 [2024-10-08 09:24:52.923351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.923359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:01.309 [2024-10-08 09:24:52.923368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:19:01.309 [2024-10-08 09:24:52.923376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.948440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.948497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:01.309 [2024-10-08 09:24:52.948511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.999 ms 00:19:01.309 [2024-10-08 09:24:52.948520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.960233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.960267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:01.309 [2024-10-08 09:24:52.960277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.659 ms 00:19:01.309 [2024-10-08 09:24:52.960285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.971450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.971483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:01.309 [2024-10-08 09:24:52.971494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.122 ms 00:19:01.309 [2024-10-08 09:24:52.971501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.309 [2024-10-08 09:24:52.972128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.309 [2024-10-08 09:24:52.972142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:01.309 [2024-10-08 09:24:52.972151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:01.309 [2024-10-08 09:24:52.972159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.031508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.031576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:01.567 [2024-10-08 09:24:53.031593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.329 ms 00:19:01.567 [2024-10-08 09:24:53.031602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.042498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:01.567 [2024-10-08 09:24:53.045704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.045733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:01.567 [2024-10-08 09:24:53.045746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.035 ms 00:19:01.567 [2024-10-08 09:24:53.045759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.045875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.045887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:01.567 [2024-10-08 09:24:53.045898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:01.567 [2024-10-08 09:24:53.045906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.045981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.045992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:01.567 [2024-10-08 09:24:53.046001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:01.567 [2024-10-08 09:24:53.046009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.046032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.046041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:01.567 [2024-10-08 09:24:53.046050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:01.567 [2024-10-08 09:24:53.046057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.046089] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:01.567 [2024-10-08 09:24:53.046100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.046108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:01.567 [2024-10-08 09:24:53.046116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:01.567 [2024-10-08 09:24:53.046127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.069956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.069993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:01.567 [2024-10-08 09:24:53.070005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.811 ms 00:19:01.567 [2024-10-08 09:24:53.070013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.567 [2024-10-08 09:24:53.070089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.567 [2024-10-08 09:24:53.070101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:01.567 [2024-10-08 09:24:53.070109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:01.567 [2024-10-08 09:24:53.070118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:01.567 [2024-10-08 09:24:53.071273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.783 ms, result 0 00:19:02.500  [2024-10-08T09:24:55.116Z] Copying: 46/1024 [MB] (46 MBps) [2024-10-08T09:24:56.489Z] Copying: 96/1024 [MB] (49 MBps) [2024-10-08T09:24:57.423Z] Copying: 142/1024 [MB] (46 MBps) [2024-10-08T09:24:58.357Z] Copying: 188/1024 [MB] (45 MBps) [2024-10-08T09:24:59.291Z] Copying: 233/1024 [MB] (45 MBps) [2024-10-08T09:25:00.224Z] Copying: 285/1024 [MB] (52 MBps) [2024-10-08T09:25:01.157Z] Copying: 333/1024 [MB] (48 MBps) [2024-10-08T09:25:02.090Z] Copying: 379/1024 [MB] (45 MBps) [2024-10-08T09:25:03.464Z] Copying: 425/1024 [MB] (46 MBps) [2024-10-08T09:25:04.430Z] Copying: 475/1024 [MB] (49 MBps) [2024-10-08T09:25:05.363Z] Copying: 518/1024 [MB] (43 MBps) [2024-10-08T09:25:06.295Z] Copying: 564/1024 [MB] (45 MBps) [2024-10-08T09:25:07.229Z] Copying: 616/1024 [MB] (52 MBps) [2024-10-08T09:25:08.163Z] Copying: 663/1024 [MB] (46 MBps) [2024-10-08T09:25:09.098Z] Copying: 704/1024 [MB] (41 MBps) [2024-10-08T09:25:10.474Z] Copying: 756/1024 [MB] (51 MBps) [2024-10-08T09:25:11.409Z] Copying: 805/1024 [MB] (49 MBps) [2024-10-08T09:25:12.344Z] Copying: 849/1024 [MB] (43 MBps) [2024-10-08T09:25:13.282Z] Copying: 892/1024 [MB] (43 MBps) [2024-10-08T09:25:14.221Z] Copying: 938/1024 [MB] (46 MBps) [2024-10-08T09:25:15.156Z] Copying: 967/1024 [MB] (29 MBps) [2024-10-08T09:25:16.091Z] Copying: 1014/1024 [MB] (47 MBps) [2024-10-08T09:25:16.349Z] Copying: 1048428/1048576 [kB] (9092 kBps) [2024-10-08T09:25:16.349Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-10-08 09:25:16.244353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.244428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:24.666 [2024-10-08 09:25:16.244444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.666 [2024-10-08 09:25:16.244461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.247604] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.666 [2024-10-08 09:25:16.253159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.253327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:24.666 [2024-10-08 09:25:16.253346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.516 ms 00:19:24.666 [2024-10-08 09:25:16.253361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.264205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.264324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:24.666 [2024-10-08 09:25:16.264399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.899 ms 00:19:24.666 [2024-10-08 09:25:16.264426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.282626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.282734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:24.666 [2024-10-08 09:25:16.282791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.171 ms 00:19:24.666 [2024-10-08 09:25:16.282814] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.288985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.289083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:24.666 [2024-10-08 09:25:16.289145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:19:24.666 [2024-10-08 09:25:16.289168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.313370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.313506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:24.666 [2024-10-08 09:25:16.313560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.139 ms 00:19:24.666 [2024-10-08 09:25:16.313609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.666 [2024-10-08 09:25:16.327905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.666 [2024-10-08 09:25:16.328023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:24.666 [2024-10-08 09:25:16.328077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.252 ms 00:19:24.666 [2024-10-08 09:25:16.328100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.383327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.926 [2024-10-08 09:25:16.383513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:24.926 [2024-10-08 09:25:16.383586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.180 ms 00:19:24.926 [2024-10-08 09:25:16.383631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.408317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.926 [2024-10-08 09:25:16.408463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:24.926 [2024-10-08 09:25:16.408516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.651 ms 00:19:24.926 [2024-10-08 09:25:16.408539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.431723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.926 [2024-10-08 09:25:16.431827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:24.926 [2024-10-08 09:25:16.431874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.144 ms 00:19:24.926 [2024-10-08 09:25:16.431897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.454190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.926 [2024-10-08 09:25:16.454305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:24.926 [2024-10-08 09:25:16.454418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.254 ms 00:19:24.926 [2024-10-08 09:25:16.454442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.477035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.926 [2024-10-08 09:25:16.477185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:24.926 [2024-10-08 09:25:16.477242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.240 ms 
00:19:24.926 [2024-10-08 09:25:16.477265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.926 [2024-10-08 09:25:16.477364] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:24.926 [2024-10-08 09:25:16.477429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120320 / 261120 wr_cnt: 1 state: open 00:19:24.926 [2024-10-08 09:25:16.477505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.477991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478511] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.478990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.479020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:24.926 [2024-10-08 09:25:16.479072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 
[2024-10-08 09:25:16.479310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:19:24.927 [2024-10-08 09:25:16.479537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:24.927 [2024-10-08 09:25:16.479751] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:24.927 [2024-10-08 09:25:16.479759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebde3831-2620-4969-b6d1-8149682e8f6d 00:19:24.927 [2024-10-08 09:25:16.479772] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120320 00:19:24.927 [2024-10-08 09:25:16.479780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121280 00:19:24.927 [2024-10-08 09:25:16.479786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120320 00:19:24.927 [2024-10-08 09:25:16.479795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:19:24.927 [2024-10-08 09:25:16.479803] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:24.927 [2024-10-08 09:25:16.479811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:24.927 [2024-10-08 09:25:16.479819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:24.927 [2024-10-08 09:25:16.479826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:24.927 [2024-10-08 09:25:16.479832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:24.927 [2024-10-08 09:25:16.479840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.927 [2024-10-08 09:25:16.479855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:24.927 [2024-10-08 09:25:16.479863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:19:24.927 [2024-10-08 09:25:16.479871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.492680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.927 [2024-10-08 09:25:16.492710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:24.927 [2024-10-08 09:25:16.492720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.790 ms 00:19:24.927 [2024-10-08 09:25:16.492728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.493094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.927 [2024-10-08 09:25:16.493110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:24.927 [2024-10-08 09:25:16.493119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:24.927 [2024-10-08 09:25:16.493131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.522569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.927 [2024-10-08 09:25:16.522601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.927 [2024-10-08 09:25:16.522612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.927 [2024-10-08 09:25:16.522620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.522678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.927 [2024-10-08 09:25:16.522686] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.927 [2024-10-08 09:25:16.522695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.927 [2024-10-08 09:25:16.522706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.522762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.927 [2024-10-08 09:25:16.522773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.927 [2024-10-08 09:25:16.522781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.927 [2024-10-08 09:25:16.522789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.522805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.927 [2024-10-08 09:25:16.522813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.927 [2024-10-08 09:25:16.522821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.927 [2024-10-08 09:25:16.522828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.927 [2024-10-08 09:25:16.603689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:24.928 [2024-10-08 09:25:16.603742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.928 [2024-10-08 09:25:16.603754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:24.928 [2024-10-08 09:25:16.603762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.669470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.669697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.186 [2024-10-08 09:25:16.669714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.669729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.669811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.669821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.186 [2024-10-08 09:25:16.669830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.669838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.669873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.669882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.186 [2024-10-08 09:25:16.669891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.669898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.669994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.670005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.186 [2024-10-08 09:25:16.670013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.670021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.670050] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.670059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.186 [2024-10-08 09:25:16.670067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.670075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.670118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.670129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.186 [2024-10-08 09:25:16.670138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.670146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.670193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.186 [2024-10-08 09:25:16.670204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.186 [2024-10-08 09:25:16.670212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.186 [2024-10-08 09:25:16.670220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.186 [2024-10-08 09:25:16.670341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 427.915 ms, result 0 00:19:27.745 00:19:27.745 00:19:27.745 09:25:18 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:27.745 [2024-10-08 09:25:18.934334] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
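The 'Dump statistics' block during the shutdown a few lines up is internally consistent: write amplification factor (WAF) is total media writes divided by user writes, and the valid-LBA count equals the user-write count, consistent with a fill that never overwrote an LBA. A quick check of the logged figures — plain arithmetic, not an SPDK API:

```python
# WAF = total media writes / user writes, using the figures from the
# ftl_debug.c stats dump above (total writes: 121280, user writes: 120320).
total_writes = 121280
user_writes = 120320

waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")   # -> 1.0080, matching the logged value
```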
00:19:27.745 [2024-10-08 09:25:18.934661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75597 ] 00:19:27.745 [2024-10-08 09:25:19.082977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.745 [2024-10-08 09:25:19.295738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.004 [2024-10-08 09:25:19.566103] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.004 [2024-10-08 09:25:19.566175] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.267 [2024-10-08 09:25:19.721887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.267 [2024-10-08 09:25:19.721947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.267 [2024-10-08 09:25:19.721960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.267 [2024-10-08 09:25:19.721968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.267 [2024-10-08 09:25:19.722016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.267 [2024-10-08 09:25:19.722025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.267 [2024-10-08 09:25:19.722032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:28.267 [2024-10-08 09:25:19.722038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.267 [2024-10-08 09:25:19.722054] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:28.267 [2024-10-08 09:25:19.722636] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.267 [2024-10-08 09:25:19.722651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.267 [2024-10-08 09:25:19.722658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.267 [2024-10-08 09:25:19.722665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:19:28.267 [2024-10-08 09:25:19.722671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.267 [2024-10-08 09:25:19.723944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.267 [2024-10-08 09:25:19.733999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.267 [2024-10-08 09:25:19.734027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.267 [2024-10-08 09:25:19.734037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.056 ms 00:19:28.267 [2024-10-08 09:25:19.734043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.267 [2024-10-08 09:25:19.734092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.267 [2024-10-08 09:25:19.734099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.268 [2024-10-08 09:25:19.734106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:28.268 [2024-10-08 09:25:19.734112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.740274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:28.268 [2024-10-08 09:25:19.740298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.268 [2024-10-08 09:25:19.740307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.121 ms 00:19:28.268 [2024-10-08 09:25:19.740313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.740374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.268 [2024-10-08 09:25:19.740381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.268 [2024-10-08 09:25:19.740399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:28.268 [2024-10-08 09:25:19.740406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.740453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.268 [2024-10-08 09:25:19.740468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.268 [2024-10-08 09:25:19.740475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:28.268 [2024-10-08 09:25:19.740482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.740501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.268 [2024-10-08 09:25:19.743591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.268 [2024-10-08 09:25:19.743612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.268 [2024-10-08 09:25:19.743620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:19:28.268 [2024-10-08 09:25:19.743626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.743651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.268 [2024-10-08 09:25:19.743658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.268 [2024-10-08 09:25:19.743665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:28.268 [2024-10-08 09:25:19.743671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.268 [2024-10-08 09:25:19.743691] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.268 [2024-10-08 09:25:19.743708] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.268 [2024-10-08 09:25:19.743738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.268 [2024-10-08 09:25:19.743752] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:28.268 [2024-10-08 09:25:19.743836] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.268 [2024-10-08 09:25:19.743846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.268 [2024-10-08 09:25:19.743854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:28.268 [2024-10-08 09:25:19.743864] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.268 [2024-10-08 09:25:19.743871] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.268 [2024-10-08 09:25:19.743882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:28.268 [2024-10-08 09:25:19.743889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.268 [2024-10-08 09:25:19.743895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.269 [2024-10-08 09:25:19.743901] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.269 [2024-10-08 09:25:19.743907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.269 [2024-10-08 09:25:19.743914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.269 [2024-10-08 09:25:19.743920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:19:28.269 [2024-10-08 09:25:19.743927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.269 [2024-10-08 09:25:19.743991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.269 [2024-10-08 09:25:19.744000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.269 [2024-10-08 09:25:19.744008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:28.269 [2024-10-08 09:25:19.744014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.269 [2024-10-08 09:25:19.744094] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.269 [2024-10-08 09:25:19.744102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.269 [2024-10-08 09:25:19.744109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.269 [2024-10-08 09:25:19.744115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.269 [2024-10-08 09:25:19.744127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:28.269 [2024-10-08 09:25:19.744140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.269 [2024-10-08 09:25:19.744146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.269 [2024-10-08 09:25:19.744157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.269 [2024-10-08 09:25:19.744162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:28.269 [2024-10-08 09:25:19.744168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.269 [2024-10-08 09:25:19.744180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.269 [2024-10-08 09:25:19.744185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:28.269 [2024-10-08 09:25:19.744190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.269 [2024-10-08 09:25:19.744201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:28.269 [2024-10-08 09:25:19.744208] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.269 [2024-10-08 09:25:19.744220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.269 [2024-10-08 09:25:19.744231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.269 [2024-10-08 09:25:19.744236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:28.269 [2024-10-08 09:25:19.744241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.269 [2024-10-08 09:25:19.744247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.269 [2024-10-08 09:25:19.744252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.270 [2024-10-08 09:25:19.744262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.270 [2024-10-08 09:25:19.744268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.270 [2024-10-08 09:25:19.744278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.270 [2024-10-08 09:25:19.744284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.270 [2024-10-08 09:25:19.744295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.270 [2024-10-08 09:25:19.744300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:28.270 [2024-10-08 09:25:19.744305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.270 [2024-10-08 09:25:19.744311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.270 [2024-10-08 09:25:19.744317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:28.270 [2024-10-08 09:25:19.744322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.270 [2024-10-08 09:25:19.744332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:28.270 [2024-10-08 09:25:19.744338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744343] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.270 [2024-10-08 09:25:19.744349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.270 [2024-10-08 09:25:19.744357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.270 [2024-10-08 09:25:19.744364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.270 [2024-10-08 09:25:19.744369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.270 [2024-10-08 09:25:19.744375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.270 [2024-10-08 09:25:19.744380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.270 
[2024-10-08 09:25:19.744397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.270 [2024-10-08 09:25:19.744403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.270 [2024-10-08 09:25:19.744409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.270 [2024-10-08 09:25:19.744415] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.270 [2024-10-08 09:25:19.744423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.270 [2024-10-08 09:25:19.744431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:28.270 [2024-10-08 09:25:19.744437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:28.270 [2024-10-08 09:25:19.744443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:28.270 [2024-10-08 09:25:19.744449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:28.271 [2024-10-08 09:25:19.744455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:28.271 [2024-10-08 09:25:19.744460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:28.271 [2024-10-08 09:25:19.744466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:28.271 [2024-10-08 09:25:19.744473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:28.271 [2024-10-08 09:25:19.744479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:28.271 [2024-10-08 09:25:19.744484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:28.271 [2024-10-08 09:25:19.744514] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.271 [2024-10-08 09:25:19.744520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.271 [2024-10-08 09:25:19.744535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.271 [2024-10-08 09:25:19.744541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.271 [2024-10-08 09:25:19.744547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.271 [2024-10-08 09:25:19.744553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.271 [2024-10-08 09:25:19.744559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.271 [2024-10-08 09:25:19.744565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:19:28.271 [2024-10-08 09:25:19.744570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.271 [2024-10-08 09:25:19.779502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.271 [2024-10-08 09:25:19.779549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.271 [2024-10-08 09:25:19.779567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.882 ms 00:19:28.271 [2024-10-08 09:25:19.779580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.271 [2024-10-08 09:25:19.779712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.271 [2024-10-08 09:25:19.779726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.272 [2024-10-08 09:25:19.779738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:28.272 [2024-10-08 09:25:19.779750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.805856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.805884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.272 [2024-10-08 09:25:19.805895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.027 ms 00:19:28.272 [2024-10-08 09:25:19.805902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.805936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.805943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.272 [2024-10-08 09:25:19.805951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:28.272 [2024-10-08 09:25:19.805957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.806359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.806380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.272 [2024-10-08 09:25:19.806397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:28.272 [2024-10-08 09:25:19.806407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.806521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.806528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.272 [2024-10-08 09:25:19.806535] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:28.272 [2024-10-08 09:25:19.806542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.817648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.817669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.272 [2024-10-08 09:25:19.817678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.089 ms 00:19:28.272 [2024-10-08 09:25:19.817684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.827909] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:28.272 [2024-10-08 09:25:19.827935] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.272 [2024-10-08 09:25:19.827945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.827953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.272 [2024-10-08 09:25:19.827961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.162 ms 00:19:28.272 [2024-10-08 09:25:19.827967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.846506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.846535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.272 [2024-10-08 09:25:19.846544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.494 ms 00:19:28.272 [2024-10-08 09:25:19.846552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.855520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.855547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.272 [2024-10-08 09:25:19.855555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.930 ms 00:19:28.272 [2024-10-08 09:25:19.855561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.272 [2024-10-08 09:25:19.864271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.272 [2024-10-08 09:25:19.864295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.273 [2024-10-08 09:25:19.864303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.681 ms 00:19:28.273 [2024-10-08 09:25:19.864309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.864806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.864824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.273 [2024-10-08 09:25:19.864832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:19:28.273 [2024-10-08 09:25:19.864838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.912736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.912785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.273 [2024-10-08 09:25:19.912797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.880 ms 00:19:28.273 [2024-10-08 09:25:19.912804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.921089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:28.273 [2024-10-08 09:25:19.923726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.923750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.273 [2024-10-08 09:25:19.923760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.862 ms 00:19:28.273 [2024-10-08 09:25:19.923771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.923857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.923866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.273 [2024-10-08 09:25:19.923874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:28.273 [2024-10-08 09:25:19.923881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.925295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.925320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.273 [2024-10-08 09:25:19.925328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:19:28.273 [2024-10-08 09:25:19.925336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.925364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.925371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:28.273 [2024-10-08 09:25:19.925378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:28.273 [2024-10-08 09:25:19.925385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.925431] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.273 [2024-10-08 09:25:19.925441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.925448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.273 [2024-10-08 09:25:19.925454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:28.273 [2024-10-08 09:25:19.925464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.943665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.943694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.273 [2024-10-08 09:25:19.943704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.185 ms 00:19:28.273 [2024-10-08 09:25:19.943710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.273 [2024-10-08 09:25:19.943777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.273 [2024-10-08 09:25:19.943784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.273 [2024-10-08 09:25:19.943792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:28.274 [2024-10-08 09:25:19.943798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
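For the copy pass that follows, the spdk_dd invocation above asked for --skip=131072 --count=262144. Assuming dd-style semantics (skip and count measured in input blocks) and a 4096-byte FTL block size — the pairing that reproduces the progress totals printed below — that is a 1 GiB read starting 512 MiB into the device:

```python
# Size arithmetic for the spdk_dd restore pass.  Assumes --skip/--count are
# counted in 4096-byte FTL blocks; this is what yields the 1024/1024 [MB]
# progress total reported below.
BLOCK_SIZE = 4096
skip, count = 131072, 262144

print(count * BLOCK_SIZE // 2**20, "MiB to copy")   # -> 1024
print(skip * BLOCK_SIZE // 2**20, "MiB offset")     # -> 512
```

The progress lines that follow then report the same 1024 MB total, at an average of 16 MBps for this pass versus 44 MBps for the earlier one.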
00:19:28.274 [2024-10-08 09:25:19.944989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.710 ms, result 0
00:19:29.651  [2024-10-08T09:25:22.271Z] Copying: 43/1024 [MB] (43 MBps)
[2024-10-08T09:25:23.210Z .. 2024-10-08T09:26:21.418Z] Copying: 77/1024 .. 1023/1024 [MB] (per-interval progress ticks, 10-43 MBps, condensed)
[2024-10-08T09:26:21.680Z] Copying: 1024/1024 [MB] (average 16 MBps)
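With the copy finished, the harness tears the FTL device down, and everything from here to the 'FTL shutdown' finish message below is the graceful path: the core poller stops, then L2P, NV cache, valid map, P2L, band and trim metadata are persisted before the superblock is marked clean. In RPC terms that teardown is a single bdev delete; a sketch, assuming the target and bdev name from this run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Deleting the FTL bdev drives the 'FTL shutdown' management process traced
  # below; a successful delete leaves the superblock in the clean state.
  "$rpc_py" bdev_ftl_delete -b ftl0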
00:20:29.997 [2024-10-08 09:26:21.450498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.450610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:29.997 [2024-10-08 09:26:21.450630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:29.997 [2024-10-08 09:26:21.450640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.450671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:29.997 [2024-10-08 09:26:21.454080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.454133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:29.997 [2024-10-08 09:26:21.454147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms
00:20:29.997 [2024-10-08 09:26:21.454166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.454446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.454470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:29.997 [2024-10-08 09:26:21.454482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms
00:20:29.997 [2024-10-08 09:26:21.454492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.459213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.459266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:29.997 [2024-10-08 09:26:21.459280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms
00:20:29.997 [2024-10-08 09:26:21.459291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.466613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.466669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:29.997 [2024-10-08 09:26:21.466681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.279 ms
00:20:29.997 [2024-10-08 09:26:21.466691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.497269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.497322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:29.997 [2024-10-08 09:26:21.497337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.498 ms
00:20:29.997 [2024-10-08 09:26:21.497347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:29.997 [2024-10-08 09:26:21.522409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:29.997 [2024-10-08 09:26:21.522489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:29.997 [2024-10-08 09:26:21.522508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms
00:20:29.997 [2024-10-08 09:26:21.522518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.258 [2024-10-08 09:26:21.888701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.258 [2024-10-08 09:26:21.888785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:30.258 [2024-10-08 09:26:21.888811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 366.109 ms
00:20:30.258 [2024-10-08 09:26:21.888820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.258 [2024-10-08 09:26:21.916060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.258 [2024-10-08 09:26:21.916113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:30.258 [2024-10-08 09:26:21.916128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.223 ms
00:20:30.258 [2024-10-08 09:26:21.916137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.258 [2024-10-08 09:26:21.941716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.258 [2024-10-08 09:26:21.941768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:30.258 [2024-10-08 09:26:21.941781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.531 ms
00:20:30.258 [2024-10-08 09:26:21.941789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.521 [2024-10-08 09:26:21.967211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.521 [2024-10-08 09:26:21.967265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:30.521 [2024-10-08 09:26:21.967279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.373 ms
00:20:30.521 [2024-10-08 09:26:21.967287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.521 [2024-10-08 09:26:21.992129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.521 [2024-10-08 09:26:21.992192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:30.521 [2024-10-08 09:26:21.992205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.751 ms
00:20:30.521 [2024-10-08 09:26:21.992213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.521 [2024-10-08 09:26:21.992262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:30.521 [2024-10-08 09:26:21.992279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:20:30.522 [2024-10-08 09:26:21.992291 .. 09:26:21.993106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries condensed)
00:20:30.523 [2024-10-08 09:26:21.993122] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:30.523 [2024-10-08 09:26:21.993132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebde3831-2620-4969-b6d1-8149682e8f6d
00:20:30.523 [2024-10-08 09:26:21.993147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:20:30.523 [2024-10-08 09:26:21.993155] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11712
00:20:30.523 [2024-10-08 09:26:21.993162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 10752
00:20:30.523 [2024-10-08 09:26:21.993171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0893
00:20:30.523 [2024-10-08 09:26:21.993179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:30.523 [2024-10-08 09:26:21.993189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:30.523 [2024-10-08 09:26:21.993198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:30.523 [2024-10-08 09:26:21.993205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:30.523 [2024-10-08 09:26:21.993211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:30.523 [2024-10-08 09:26:21.993219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.523 [2024-10-08 09:26:21.993227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:30.523 [2024-10-08 09:26:21.993244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms
00:20:30.523 [2024-10-08 09:26:21.993252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
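The dump above doubles as a worked example of write amplification: WAF = total writes / user writes = 11712 / 10752 ≈ 1.0893, so roughly 9% of the writes that reached the media in this run were FTL bookkeeping (metadata and relocation) rather than user data. The same arithmetic in shell, using the counters from this dump:

  total_writes=11712   # from ftl_dev_dump_stats above
  user_writes=10752
  # Prints "WAF: 1.0893", matching the value logged by ftl_debug.c.
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'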
00:20:30.523 [2024-10-08 09:26:22.007164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.523 [2024-10-08 09:26:22.007212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:30.523 [2024-10-08 09:26:22.007224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.891 ms
00:20:30.523 [2024-10-08 09:26:22.007232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.523 [2024-10-08 09:26:22.007688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.523 [2024-10-08 09:26:22.007711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:30.523 [2024-10-08 09:26:22.007722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms
00:20:30.523 [2024-10-08 09:26:22.007738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.523 [2024-10-08 09:26:22.039663 .. 09:26:22.197592] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (each duration: 0.000 ms, status: 0; 12 identical step groups condensed)
00:20:30.523 [2024-10-08 09:26:22.197733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 747.202 ms, result 0
09:26:23 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:34.014 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
09:26:25 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
09:26:25 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
09:26:25 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
09:26:25 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
09:26:25 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
09:26:25 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74581
09:26:25 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74581 ']'
09:26:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74581
00:20:34.014 Process with pid 74581 is not found
00:20:34.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74581) - No such process
09:26:25 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74581 is not found'
00:20:34.014 Remove shared memory files
09:26:25 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
09:26:25 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
09:26:25 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
09:26:25 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
09:26:25 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
09:26:25 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
09:26:25 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:20:34.014 real 2m41.463s
00:20:34.014 user 2m30.721s
00:20:34.014 sys 0m11.793s
09:26:25 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:34.014 ************************************
00:20:34.014 END TEST ftl_restore
09:26:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:34.014 ************************************
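The single 'testfile: OK' line above is what the whole ftl_restore run was for: data written before the device is torn down must checksum identically after it is brought back. Reduced to its essentials, the check follows the sketch below; the paths are the ones from this run, the payload size matches the 1024 MB copied above, and the dd invocation is illustrative rather than lifted from restore.sh:

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # Write a known payload and record its checksum before the restart...
  dd if=/dev/urandom of="$testfile" bs=1M count=1024
  md5sum "$testfile" > "$testfile.md5"
  # ...then, after the FTL device is recreated and the data copied back out:
  md5sum -c "$testfile.md5"   # expected output: testfile: OK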
09:26:25 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
09:26:25 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
09:26:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
09:26:25 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:34.015 ************************************
00:20:34.015 START TEST ftl_dirty_shutdown
00:20:34.015 ************************************
09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
* Looking for test storage...
00:20:34.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:34.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.015 --rc genhtml_branch_coverage=1 00:20:34.015 --rc genhtml_function_coverage=1 00:20:34.015 --rc genhtml_legend=1 00:20:34.015 --rc geninfo_all_blocks=1 00:20:34.015 --rc geninfo_unexecuted_blocks=1 00:20:34.015 00:20:34.015 ' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:34.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.015 --rc genhtml_branch_coverage=1 00:20:34.015 --rc genhtml_function_coverage=1 00:20:34.015 --rc genhtml_legend=1 00:20:34.015 --rc geninfo_all_blocks=1 00:20:34.015 --rc geninfo_unexecuted_blocks=1 00:20:34.015 00:20:34.015 ' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:34.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.015 --rc genhtml_branch_coverage=1 00:20:34.015 --rc genhtml_function_coverage=1 00:20:34.015 --rc genhtml_legend=1 00:20:34.015 --rc geninfo_all_blocks=1 00:20:34.015 --rc geninfo_unexecuted_blocks=1 00:20:34.015 00:20:34.015 ' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:34.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.015 --rc genhtml_branch_coverage=1 00:20:34.015 --rc genhtml_function_coverage=1 00:20:34.015 --rc genhtml_legend=1 00:20:34.015 --rc geninfo_all_blocks=1 00:20:34.015 --rc geninfo_unexecuted_blocks=1 00:20:34.015 00:20:34.015 ' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:34.015 09:26:25 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76349 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76349 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76349 ']' 00:20:34.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.015 09:26:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:34.276 [2024-10-08 09:26:25.726267] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
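From here on, everything is JSON-RPC against the spdk_tgt that just started. Stripped of the xtrace noise, the trace that follows performs, in effect, the sequence sketched below; this is a condensed sketch, not the script itself, and while the PCIe addresses, sizes, names and UUIDs are taken verbatim from this run, the UUIDs will differ on any other run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base (data) device on 0000:00:11.0; NV cache device on 0000:00:10.0.
  "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # Thin-provisioned lvol (-t) on top of the base namespace, size as passed here.
  "$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs
  "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u ef44fc76-3933-4f32-b1ec-f580351013b5
  # Carve a 5171 MB split off the cache namespace to serve as the NV cache.
  "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  "$rpc_py" bdev_split_create nvc0n1 -s 5171 1
  # Build the FTL bdev: lvol as base, split as NV cache, 10 MiB of L2P in DRAM.
  "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d e9f7d745-49b2-40b4-932a-6a72a35ab744 --l2p_dram_limit 10 -c nvc0n1p0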
00:20:34.276 [2024-10-08 09:26:25.726433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76349 ] 00:20:34.276 [2024-10-08 09:26:25.878898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.537 [2024-10-08 09:26:26.074539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:35.109 09:26:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:35.681 { 00:20:35.681 "name": "nvme0n1", 00:20:35.681 "aliases": [ 00:20:35.681 "b0966661-e14d-4b0a-9b9d-d0cf80d11168" 00:20:35.681 ], 00:20:35.681 "product_name": "NVMe disk", 00:20:35.681 "block_size": 4096, 00:20:35.681 "num_blocks": 1310720, 00:20:35.681 "uuid": "b0966661-e14d-4b0a-9b9d-d0cf80d11168", 00:20:35.681 "numa_id": -1, 00:20:35.681 "assigned_rate_limits": { 00:20:35.681 "rw_ios_per_sec": 0, 00:20:35.681 "rw_mbytes_per_sec": 0, 00:20:35.681 "r_mbytes_per_sec": 0, 00:20:35.681 "w_mbytes_per_sec": 0 00:20:35.681 }, 00:20:35.681 "claimed": true, 00:20:35.681 "claim_type": "read_many_write_one", 00:20:35.681 "zoned": false, 00:20:35.681 "supported_io_types": { 00:20:35.681 "read": true, 00:20:35.681 "write": true, 00:20:35.681 "unmap": true, 00:20:35.681 "flush": true, 00:20:35.681 "reset": true, 00:20:35.681 "nvme_admin": true, 00:20:35.681 "nvme_io": true, 00:20:35.681 "nvme_io_md": false, 00:20:35.681 "write_zeroes": true, 00:20:35.681 "zcopy": false, 00:20:35.681 "get_zone_info": false, 00:20:35.681 "zone_management": false, 00:20:35.681 "zone_append": false, 00:20:35.681 "compare": true, 00:20:35.681 "compare_and_write": false, 00:20:35.681 "abort": true, 00:20:35.681 "seek_hole": false, 00:20:35.681 "seek_data": false, 00:20:35.681 
"copy": true, 00:20:35.681 "nvme_iov_md": false 00:20:35.681 }, 00:20:35.681 "driver_specific": { 00:20:35.681 "nvme": [ 00:20:35.681 { 00:20:35.681 "pci_address": "0000:00:11.0", 00:20:35.681 "trid": { 00:20:35.681 "trtype": "PCIe", 00:20:35.681 "traddr": "0000:00:11.0" 00:20:35.681 }, 00:20:35.681 "ctrlr_data": { 00:20:35.681 "cntlid": 0, 00:20:35.681 "vendor_id": "0x1b36", 00:20:35.681 "model_number": "QEMU NVMe Ctrl", 00:20:35.681 "serial_number": "12341", 00:20:35.681 "firmware_revision": "8.0.0", 00:20:35.681 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:35.681 "oacs": { 00:20:35.681 "security": 0, 00:20:35.681 "format": 1, 00:20:35.681 "firmware": 0, 00:20:35.681 "ns_manage": 1 00:20:35.681 }, 00:20:35.681 "multi_ctrlr": false, 00:20:35.681 "ana_reporting": false 00:20:35.681 }, 00:20:35.681 "vs": { 00:20:35.681 "nvme_version": "1.4" 00:20:35.681 }, 00:20:35.681 "ns_data": { 00:20:35.681 "id": 1, 00:20:35.681 "can_share": false 00:20:35.681 } 00:20:35.681 } 00:20:35.681 ], 00:20:35.681 "mp_policy": "active_passive" 00:20:35.681 } 00:20:35.681 } 00:20:35.681 ]' 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:35.681 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:35.942 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9df07b85-6787-438f-a02d-a56ab67e7b44 00:20:35.942 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:35.942 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9df07b85-6787-438f-a02d-a56ab67e7b44 00:20:36.213 09:26:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:36.510 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ef44fc76-3933-4f32-b1ec-f580351013b5 00:20:36.510 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef44fc76-3933-4f32-b1ec-f580351013b5 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:36.783 { 00:20:36.783 "name": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:36.783 "aliases": [ 00:20:36.783 "lvs/nvme0n1p0" 00:20:36.783 ], 00:20:36.783 "product_name": "Logical Volume", 00:20:36.783 "block_size": 4096, 00:20:36.783 "num_blocks": 26476544, 00:20:36.783 "uuid": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:36.783 "assigned_rate_limits": { 00:20:36.783 "rw_ios_per_sec": 0, 00:20:36.783 "rw_mbytes_per_sec": 0, 00:20:36.783 "r_mbytes_per_sec": 0, 00:20:36.783 "w_mbytes_per_sec": 0 00:20:36.783 }, 00:20:36.783 "claimed": false, 00:20:36.783 "zoned": false, 00:20:36.783 "supported_io_types": { 00:20:36.783 "read": true, 00:20:36.783 "write": true, 00:20:36.783 "unmap": true, 00:20:36.783 "flush": false, 00:20:36.783 "reset": true, 00:20:36.783 "nvme_admin": false, 00:20:36.783 "nvme_io": false, 00:20:36.783 "nvme_io_md": false, 00:20:36.783 "write_zeroes": true, 00:20:36.783 "zcopy": false, 00:20:36.783 "get_zone_info": false, 00:20:36.783 "zone_management": false, 00:20:36.783 "zone_append": false, 00:20:36.783 "compare": false, 00:20:36.783 "compare_and_write": false, 00:20:36.783 "abort": false, 00:20:36.783 "seek_hole": true, 00:20:36.783 "seek_data": true, 00:20:36.783 "copy": false, 00:20:36.783 "nvme_iov_md": false 00:20:36.783 }, 00:20:36.783 "driver_specific": { 00:20:36.783 "lvol": { 00:20:36.783 "lvol_store_uuid": "ef44fc76-3933-4f32-b1ec-f580351013b5", 00:20:36.783 "base_bdev": "nvme0n1", 00:20:36.783 "thin_provision": true, 00:20:36.783 "num_allocated_clusters": 0, 00:20:36.783 "snapshot": false, 00:20:36.783 "clone": false, 00:20:36.783 "esnap_clone": false 00:20:36.783 } 00:20:36.783 } 00:20:36.783 } 00:20:36.783 ]' 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:36.783 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:37.045 09:26:28 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:37.306 { 00:20:37.306 "name": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:37.306 "aliases": [ 00:20:37.306 "lvs/nvme0n1p0" 00:20:37.306 ], 00:20:37.306 "product_name": "Logical Volume", 00:20:37.306 "block_size": 4096, 00:20:37.306 "num_blocks": 26476544, 00:20:37.306 "uuid": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:37.306 "assigned_rate_limits": { 00:20:37.306 "rw_ios_per_sec": 0, 00:20:37.306 "rw_mbytes_per_sec": 0, 00:20:37.306 "r_mbytes_per_sec": 0, 00:20:37.306 "w_mbytes_per_sec": 0 00:20:37.306 }, 00:20:37.306 "claimed": false, 00:20:37.306 "zoned": false, 00:20:37.306 "supported_io_types": { 00:20:37.306 "read": true, 00:20:37.306 "write": true, 00:20:37.306 "unmap": true, 00:20:37.306 "flush": false, 00:20:37.306 "reset": true, 00:20:37.306 "nvme_admin": false, 00:20:37.306 "nvme_io": false, 00:20:37.306 "nvme_io_md": false, 00:20:37.306 "write_zeroes": true, 00:20:37.306 "zcopy": false, 00:20:37.306 "get_zone_info": false, 00:20:37.306 "zone_management": false, 00:20:37.306 "zone_append": false, 00:20:37.306 "compare": false, 00:20:37.306 "compare_and_write": false, 00:20:37.306 "abort": false, 00:20:37.306 "seek_hole": true, 00:20:37.306 "seek_data": true, 00:20:37.306 "copy": false, 00:20:37.306 "nvme_iov_md": false 00:20:37.306 }, 00:20:37.306 "driver_specific": { 00:20:37.306 "lvol": { 00:20:37.306 "lvol_store_uuid": "ef44fc76-3933-4f32-b1ec-f580351013b5", 00:20:37.306 "base_bdev": "nvme0n1", 00:20:37.306 "thin_provision": true, 00:20:37.306 "num_allocated_clusters": 0, 00:20:37.306 "snapshot": false, 00:20:37.306 "clone": false, 00:20:37.306 "esnap_clone": false 00:20:37.306 } 00:20:37.306 } 00:20:37.306 } 00:20:37.306 ]' 00:20:37.306 09:26:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:37.567 09:26:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9f7d745-49b2-40b4-932a-6a72a35ab744 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:37.828 { 00:20:37.828 "name": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:37.828 "aliases": [ 00:20:37.828 "lvs/nvme0n1p0" 00:20:37.828 ], 00:20:37.828 "product_name": "Logical Volume", 00:20:37.828 "block_size": 4096, 00:20:37.828 "num_blocks": 26476544, 00:20:37.828 "uuid": "e9f7d745-49b2-40b4-932a-6a72a35ab744", 00:20:37.828 "assigned_rate_limits": { 00:20:37.828 "rw_ios_per_sec": 0, 00:20:37.828 "rw_mbytes_per_sec": 0, 00:20:37.828 "r_mbytes_per_sec": 0, 00:20:37.828 "w_mbytes_per_sec": 0 00:20:37.828 }, 00:20:37.828 "claimed": false, 00:20:37.828 "zoned": false, 00:20:37.828 "supported_io_types": { 00:20:37.828 "read": true, 00:20:37.828 "write": true, 00:20:37.828 "unmap": true, 00:20:37.828 "flush": false, 00:20:37.828 "reset": true, 00:20:37.828 "nvme_admin": false, 00:20:37.828 "nvme_io": false, 00:20:37.828 "nvme_io_md": false, 00:20:37.828 "write_zeroes": true, 00:20:37.828 "zcopy": false, 00:20:37.828 "get_zone_info": false, 00:20:37.828 "zone_management": false, 00:20:37.828 "zone_append": false, 00:20:37.828 "compare": false, 00:20:37.828 "compare_and_write": false, 00:20:37.828 "abort": false, 00:20:37.828 "seek_hole": true, 00:20:37.828 "seek_data": true, 00:20:37.828 "copy": false, 00:20:37.828 "nvme_iov_md": false 00:20:37.828 }, 00:20:37.828 "driver_specific": { 00:20:37.828 "lvol": { 00:20:37.828 "lvol_store_uuid": "ef44fc76-3933-4f32-b1ec-f580351013b5", 00:20:37.828 "base_bdev": "nvme0n1", 00:20:37.828 "thin_provision": true, 00:20:37.828 "num_allocated_clusters": 0, 00:20:37.828 "snapshot": false, 00:20:37.828 "clone": false, 00:20:37.828 "esnap_clone": false 00:20:37.828 } 00:20:37.828 } 00:20:37.828 } 00:20:37.828 ]' 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:37.828 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e9f7d745-49b2-40b4-932a-6a72a35ab744 
--l2p_dram_limit 10' 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:38.090 09:26:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e9f7d745-49b2-40b4-932a-6a72a35ab744 --l2p_dram_limit 10 -c nvc0n1p0 00:20:38.090 [2024-10-08 09:26:29.730798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.730844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:38.090 [2024-10-08 09:26:29.730860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.090 [2024-10-08 09:26:29.730867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.730911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.730919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.090 [2024-10-08 09:26:29.730927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:38.090 [2024-10-08 09:26:29.730933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.730953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:38.090 [2024-10-08 09:26:29.731565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:38.090 [2024-10-08 09:26:29.731589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.731595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.090 [2024-10-08 09:26:29.731603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:20:38.090 [2024-10-08 09:26:29.731612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.731639] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 54e93b64-5b47-437d-a677-89097ad5eeb3 00:20:38.090 [2024-10-08 09:26:29.732830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.732857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:38.090 [2024-10-08 09:26:29.732865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:38.090 [2024-10-08 09:26:29.732873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.737684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.737716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.090 [2024-10-08 09:26:29.737723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.751 ms 00:20:38.090 [2024-10-08 09:26:29.737731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.737800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.737810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.090 [2024-10-08 09:26:29.737816] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:38.090 [2024-10-08 09:26:29.737826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.737867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.737877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:38.090 [2024-10-08 09:26:29.737883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:38.090 [2024-10-08 09:26:29.737890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.737907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.090 [2024-10-08 09:26:29.740783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.740811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.090 [2024-10-08 09:26:29.740820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:20:38.090 [2024-10-08 09:26:29.740826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.740854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.740860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:38.090 [2024-10-08 09:26:29.740868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:38.090 [2024-10-08 09:26:29.740876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.090 [2024-10-08 09:26:29.740889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:38.090 [2024-10-08 09:26:29.740993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:38.090 [2024-10-08 09:26:29.741011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:38.090 [2024-10-08 09:26:29.741019] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:38.090 [2024-10-08 09:26:29.741030] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:38.090 [2024-10-08 09:26:29.741037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:38.090 [2024-10-08 09:26:29.741045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:38.090 [2024-10-08 09:26:29.741051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:38.090 [2024-10-08 09:26:29.741057] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:38.090 [2024-10-08 09:26:29.741063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:38.090 [2024-10-08 09:26:29.741071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.090 [2024-10-08 09:26:29.741081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:38.090 [2024-10-08 09:26:29.741088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:38.090 [2024-10-08 09:26:29.741094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.091 [2024-10-08 09:26:29.741159] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.091 [2024-10-08 09:26:29.741173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:38.091 [2024-10-08 09:26:29.741181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:38.091 [2024-10-08 09:26:29.741186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.091 [2024-10-08 09:26:29.741260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:38.091 [2024-10-08 09:26:29.741267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:38.091 [2024-10-08 09:26:29.741275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:38.091 [2024-10-08 09:26:29.741292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:38.091 [2024-10-08 09:26:29.741311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.091 [2024-10-08 09:26:29.741322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:38.091 [2024-10-08 09:26:29.741327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:38.091 [2024-10-08 09:26:29.741333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:38.091 [2024-10-08 09:26:29.741338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:38.091 [2024-10-08 09:26:29.741345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:38.091 [2024-10-08 09:26:29.741350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:38.091 [2024-10-08 09:26:29.741362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:38.091 [2024-10-08 09:26:29.741380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:38.091 [2024-10-08 09:26:29.741410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:38.091 [2024-10-08 09:26:29.741427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741439] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:38.091 [2024-10-08 09:26:29.741445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:38.091 [2024-10-08 09:26:29.741464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.091 [2024-10-08 09:26:29.741475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:38.091 [2024-10-08 09:26:29.741480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:38.091 [2024-10-08 09:26:29.741487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:38.091 [2024-10-08 09:26:29.741492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:38.091 [2024-10-08 09:26:29.741498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:38.091 [2024-10-08 09:26:29.741503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:38.091 [2024-10-08 09:26:29.741514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:38.091 [2024-10-08 09:26:29.741521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:38.091 [2024-10-08 09:26:29.741533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:38.091 [2024-10-08 09:26:29.741540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:38.091 [2024-10-08 09:26:29.741553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:38.091 [2024-10-08 09:26:29.741562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:38.091 [2024-10-08 09:26:29.741567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:38.091 [2024-10-08 09:26:29.741574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:38.091 [2024-10-08 09:26:29.741578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:38.091 [2024-10-08 09:26:29.741584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:38.091 [2024-10-08 09:26:29.741592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:38.091 [2024-10-08 09:26:29.741601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:38.091 [2024-10-08 09:26:29.741616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:38.091 [2024-10-08 09:26:29.741621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:38.091 [2024-10-08 09:26:29.741627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:38.091 [2024-10-08 09:26:29.741633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:38.091 [2024-10-08 09:26:29.741640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:38.091 [2024-10-08 09:26:29.741645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:38.091 [2024-10-08 09:26:29.741651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:38.091 [2024-10-08 09:26:29.741657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:38.091 [2024-10-08 09:26:29.741664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:38.091 [2024-10-08 09:26:29.741694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:38.091 [2024-10-08 09:26:29.741702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:38.091 [2024-10-08 09:26:29.741716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:38.091 [2024-10-08 09:26:29.741722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:38.091 [2024-10-08 09:26:29.741729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:38.091 [2024-10-08 09:26:29.741735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.091 [2024-10-08 09:26:29.741741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:38.091 [2024-10-08 09:26:29.741747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:20:38.091 [2024-10-08 09:26:29.741753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.091 [2024-10-08 09:26:29.741795] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:38.091 [2024-10-08 09:26:29.741806] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:42.297 [2024-10-08 09:26:33.380155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.380245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:42.297 [2024-10-08 09:26:33.380264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3638.344 ms 00:20:42.297 [2024-10-08 09:26:33.380277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.412072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.412141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.297 [2024-10-08 09:26:33.412156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.529 ms 00:20:42.297 [2024-10-08 09:26:33.412168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.412316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.412330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.297 [2024-10-08 09:26:33.412340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:42.297 [2024-10-08 09:26:33.412353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.458698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.458764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.297 [2024-10-08 09:26:33.458783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.285 ms 00:20:42.297 [2024-10-08 09:26:33.458798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.458849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.458864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.297 [2024-10-08 09:26:33.458875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.297 [2024-10-08 09:26:33.458895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.459582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.459625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.297 [2024-10-08 09:26:33.459639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:20:42.297 [2024-10-08 09:26:33.459654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.459790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.459813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.297 [2024-10-08 09:26:33.459825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:42.297 [2024-10-08 09:26:33.459840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.476755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.476800] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.297 [2024-10-08 09:26:33.476810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:20:42.297 [2024-10-08 09:26:33.476818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.487364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.297 [2024-10-08 09:26:33.490742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.490783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.297 [2024-10-08 09:26:33.490794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:20:42.297 [2024-10-08 09:26:33.490804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.570417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.570453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:42.297 [2024-10-08 09:26:33.570466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.587 ms 00:20:42.297 [2024-10-08 09:26:33.570473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.570615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.570623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.297 [2024-10-08 09:26:33.570634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:42.297 [2024-10-08 09:26:33.570640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.588519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.588551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:42.297 [2024-10-08 09:26:33.588561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.852 ms 00:20:42.297 [2024-10-08 09:26:33.588567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.606632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.606660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:42.297 [2024-10-08 09:26:33.606669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.033 ms 00:20:42.297 [2024-10-08 09:26:33.606675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.607124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.607139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.297 [2024-10-08 09:26:33.607147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:20:42.297 [2024-10-08 09:26:33.607153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.669500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.669530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:42.297 [2024-10-08 09:26:33.669543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.321 ms 00:20:42.297 [2024-10-08 09:26:33.669550] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.688668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.688697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:42.297 [2024-10-08 09:26:33.688706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:20:42.297 [2024-10-08 09:26:33.688713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.706918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.706945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:42.297 [2024-10-08 09:26:33.706955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:20:42.297 [2024-10-08 09:26:33.706961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.725679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.725707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.297 [2024-10-08 09:26:33.725716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.688 ms 00:20:42.297 [2024-10-08 09:26:33.725721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.725754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.725761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.297 [2024-10-08 09:26:33.725770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.297 [2024-10-08 09:26:33.725777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.725835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.297 [2024-10-08 09:26:33.725843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.297 [2024-10-08 09:26:33.725850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:42.297 [2024-10-08 09:26:33.725856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.297 [2024-10-08 09:26:33.726587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3995.445 ms, result 0 00:20:42.297 { 00:20:42.297 "name": "ftl0", 00:20:42.297 "uuid": "54e93b64-5b47-437d-a677-89097ad5eeb3" 00:20:42.297 } 00:20:42.297 09:26:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:42.297 09:26:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:42.297 09:26:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:42.297 09:26:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:42.297 09:26:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:42.558 /dev/nbd0 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:42.558 1+0 records in 00:20:42.558 1+0 records out 00:20:42.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284861 s, 14.4 MB/s 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:20:42.558 09:26:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:42.819 [2024-10-08 09:26:34.253251] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:20:42.819 [2024-10-08 09:26:34.253363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76501 ] 00:20:42.819 [2024-10-08 09:26:34.402590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.079 [2024-10-08 09:26:34.574840] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.467  [2024-10-08T09:26:37.093Z] Copying: 209/1024 [MB] (209 MBps) [2024-10-08T09:26:38.036Z] Copying: 467/1024 [MB] (258 MBps) [2024-10-08T09:26:39.013Z] Copying: 726/1024 [MB] (258 MBps) [2024-10-08T09:26:39.274Z] Copying: 946/1024 [MB] (220 MBps) [2024-10-08T09:26:40.216Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:20:48.533 00:20:48.533 09:26:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:51.077 09:26:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:51.077 [2024-10-08 09:26:42.229450] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
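The trace above drives I/O to the FTL bdev through the kernel NBD client: ftl0 is exported as /dev/nbd0, a 1 GiB test file (262144 blocks of 4096 bytes) is generated and checksummed, and it is then written through the device with direct I/O. A minimal bash sketch of that flow, assuming a running SPDK target that already exposes ftl0; the rpc.py path and file names below are placeholders, and plain coreutils dd stands in for the spdk_dd invocations the test actually uses:

  RPC=/path/to/spdk/scripts/rpc.py           # placeholder path to SPDK's rpc.py
  modprobe nbd                               # load the kernel NBD client
  "$RPC" nbd_start_disk ftl0 /dev/nbd0       # export the FTL bdev as a block device
  dd if=/dev/urandom of=testfile bs=4096 count=262144   # 1 GiB of random data
  md5sum testfile                            # checksum to verify after recovery
  dd if=testfile of=/dev/nbd0 bs=4096 oflag=direct      # write it through the FTL
  sync /dev/nbd0                             # flush before stopping the export
  "$RPC" nbd_stop_disk /dev/nbd0

The direct-I/O flag matters here: it keeps the page cache out of the way so the data actually reaches the FTL write path before the forced shutdown later in the test.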
00:20:51.077 [2024-10-08 09:26:42.229550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76589 ] 00:20:51.077 [2024-10-08 09:26:42.371925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.077 [2024-10-08 09:26:42.568418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.465  [2024-10-08T09:26:45.129Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-08T09:26:46.072Z] Copying: 42/1024 [MB] (19 MBps) [2024-10-08T09:26:47.007Z] Copying: 57/1024 [MB] (14 MBps) [2024-10-08T09:26:47.940Z] Copying: 71/1024 [MB] (14 MBps) [2024-10-08T09:26:48.875Z] Copying: 87/1024 [MB] (16 MBps) [2024-10-08T09:26:49.810Z] Copying: 99/1024 [MB] (11 MBps) [2024-10-08T09:26:51.186Z] Copying: 116/1024 [MB] (17 MBps) [2024-10-08T09:26:52.121Z] Copying: 135/1024 [MB] (18 MBps) [2024-10-08T09:26:53.056Z] Copying: 158/1024 [MB] (23 MBps) [2024-10-08T09:26:53.991Z] Copying: 173/1024 [MB] (14 MBps) [2024-10-08T09:26:54.926Z] Copying: 185/1024 [MB] (12 MBps) [2024-10-08T09:26:55.861Z] Copying: 204/1024 [MB] (18 MBps) [2024-10-08T09:26:57.237Z] Copying: 238/1024 [MB] (34 MBps) [2024-10-08T09:26:57.809Z] Copying: 272/1024 [MB] (34 MBps) [2024-10-08T09:26:59.190Z] Copying: 303/1024 [MB] (30 MBps) [2024-10-08T09:27:00.127Z] Copying: 328/1024 [MB] (24 MBps) [2024-10-08T09:27:01.068Z] Copying: 360/1024 [MB] (32 MBps) [2024-10-08T09:27:02.009Z] Copying: 393/1024 [MB] (32 MBps) [2024-10-08T09:27:02.985Z] Copying: 419/1024 [MB] (25 MBps) [2024-10-08T09:27:03.938Z] Copying: 446/1024 [MB] (26 MBps) [2024-10-08T09:27:04.871Z] Copying: 476/1024 [MB] (30 MBps) [2024-10-08T09:27:05.808Z] Copying: 508/1024 [MB] (32 MBps) [2024-10-08T09:27:07.190Z] Copying: 542/1024 [MB] (33 MBps) [2024-10-08T09:27:08.134Z] Copying: 570/1024 [MB] (28 MBps) [2024-10-08T09:27:09.069Z] Copying: 599/1024 [MB] (28 MBps) [2024-10-08T09:27:10.011Z] Copying: 632/1024 [MB] (32 MBps) [2024-10-08T09:27:10.972Z] Copying: 658/1024 [MB] (26 MBps) [2024-10-08T09:27:11.906Z] Copying: 684/1024 [MB] (25 MBps) [2024-10-08T09:27:12.841Z] Copying: 718/1024 [MB] (34 MBps) [2024-10-08T09:27:14.220Z] Copying: 753/1024 [MB] (34 MBps) [2024-10-08T09:27:15.161Z] Copying: 784/1024 [MB] (31 MBps) [2024-10-08T09:27:16.102Z] Copying: 818/1024 [MB] (33 MBps) [2024-10-08T09:27:17.045Z] Copying: 846/1024 [MB] (28 MBps) [2024-10-08T09:27:17.987Z] Copying: 876/1024 [MB] (29 MBps) [2024-10-08T09:27:18.928Z] Copying: 903/1024 [MB] (27 MBps) [2024-10-08T09:27:19.912Z] Copying: 934/1024 [MB] (30 MBps) [2024-10-08T09:27:20.845Z] Copying: 960/1024 [MB] (26 MBps) [2024-10-08T09:27:21.778Z] Copying: 995/1024 [MB] (35 MBps) [2024-10-08T09:27:22.345Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:21:30.662 00:21:30.662 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:21:30.662 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:21:30.919 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:31.179 [2024-10-08 09:27:22.610203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.179 [2024-10-08 09:27:22.610245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.179 [2024-10-08 09:27:22.610258] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.179 [2024-10-08 09:27:22.610266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.179 [2024-10-08 09:27:22.610284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.179 [2024-10-08 09:27:22.612522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.179 [2024-10-08 09:27:22.612548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.179 [2024-10-08 09:27:22.612561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:21:31.179 [2024-10-08 09:27:22.612568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.179 [2024-10-08 09:27:22.615048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.179 [2024-10-08 09:27:22.615080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.179 [2024-10-08 09:27:22.615093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:21:31.179 [2024-10-08 09:27:22.615099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.179 [2024-10-08 09:27:22.629235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.179 [2024-10-08 09:27:22.629261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.179 [2024-10-08 09:27:22.629272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.118 ms 00:21:31.179 [2024-10-08 09:27:22.629278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.634008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.634031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.180 [2024-10-08 09:27:22.634044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:21:31.180 [2024-10-08 09:27:22.634051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.653914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.653940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.180 [2024-10-08 09:27:22.653950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.819 ms 00:21:31.180 [2024-10-08 09:27:22.653956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.667358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.667385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.180 [2024-10-08 09:27:22.667408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.369 ms 00:21:31.180 [2024-10-08 09:27:22.667414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.667558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.667567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.180 [2024-10-08 09:27:22.667577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:21:31.180 [2024-10-08 09:27:22.667585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.685886] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.685910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.180 [2024-10-08 09:27:22.685919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.285 ms 00:21:31.180 [2024-10-08 09:27:22.685925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.703981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.704007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.180 [2024-10-08 09:27:22.704016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.025 ms 00:21:31.180 [2024-10-08 09:27:22.704023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.721411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.721436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.180 [2024-10-08 09:27:22.721445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.357 ms 00:21:31.180 [2024-10-08 09:27:22.721451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.739186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.180 [2024-10-08 09:27:22.739210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.180 [2024-10-08 09:27:22.739220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.676 ms 00:21:31.180 [2024-10-08 09:27:22.739226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.180 [2024-10-08 09:27:22.739255] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.180 [2024-10-08 09:27:22.739268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739729] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.180 [2024-10-08 09:27:22.739768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739894] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.181 [2024-10-08 09:27:22.739999] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.181 [2024-10-08 09:27:22.740009] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 54e93b64-5b47-437d-a677-89097ad5eeb3 00:21:31.181 [2024-10-08 09:27:22.740015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:31.181 [2024-10-08 09:27:22.740024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:31.181 [2024-10-08 09:27:22.740030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:31.181 [2024-10-08 09:27:22.740037] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:31.181 [2024-10-08 09:27:22.740043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.181 [2024-10-08 09:27:22.740050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.181 [2024-10-08 09:27:22.740057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.181 [2024-10-08 09:27:22.740063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.181 [2024-10-08 09:27:22.740068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.181 [2024-10-08 09:27:22.740075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.181 [2024-10-08 09:27:22.740081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.181 [2024-10-08 09:27:22.740088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 
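The entries above record bdev_ftl_unload walking the managed shutdown pipeline: persist the L2P, NV cache, valid-map, P2L, band and trim metadata, write the superblock, set the clean state, then dump per-band statistics. As a sketch, the create/unload pair the test drives over JSON-RPC, reusing the bdev names from this run; the rpc.py path is a placeholder and $base_bdev stands in for the thin-provisioned lvol (here referenced by its UUID):

  RPC=/path/to/spdk/scripts/rpc.py           # placeholder path
  # create: base device is the lvol, cache is the 5171 MiB split of nvc0n1
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$base_bdev" -c nvc0n1p0 --l2p_dram_limit 10
  # ... I/O against the device ...
  # unload persists all metadata and marks the FTL clean; a target killed
  # before this step completes leaves the device in the dirty state this
  # test exercises
  "$RPC" bdev_ftl_unload -b ftl0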
00:21:31.181 [2024-10-08 09:27:22.740094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.750157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.181 [2024-10-08 09:27:22.750180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.181 [2024-10-08 09:27:22.750190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.036 ms 00:21:31.181 [2024-10-08 09:27:22.750196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.750505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.181 [2024-10-08 09:27:22.750514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.181 [2024-10-08 09:27:22.750523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:21:31.181 [2024-10-08 09:27:22.750530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.781211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.181 [2024-10-08 09:27:22.781237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.181 [2024-10-08 09:27:22.781247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.181 [2024-10-08 09:27:22.781253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.781305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.181 [2024-10-08 09:27:22.781311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.181 [2024-10-08 09:27:22.781319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.181 [2024-10-08 09:27:22.781326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.781420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.181 [2024-10-08 09:27:22.781429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.181 [2024-10-08 09:27:22.781437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.181 [2024-10-08 09:27:22.781443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.781460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.181 [2024-10-08 09:27:22.781467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.181 [2024-10-08 09:27:22.781475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.181 [2024-10-08 09:27:22.781481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.181 [2024-10-08 09:27:22.843537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.181 [2024-10-08 09:27:22.843567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.181 [2024-10-08 09:27:22.843577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.181 [2024-10-08 09:27:22.843584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.894617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.894653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.440 [2024-10-08 09:27:22.894664] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.894674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.894748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.894757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.440 [2024-10-08 09:27:22.894766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.894772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.894831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.894841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.440 [2024-10-08 09:27:22.894849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.894855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.894940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.894948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.440 [2024-10-08 09:27:22.894956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.894962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.894991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.894998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.440 [2024-10-08 09:27:22.895007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.895013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.895052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.895060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.440 [2024-10-08 09:27:22.895068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.895075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.895120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.440 [2024-10-08 09:27:22.895128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.440 [2024-10-08 09:27:22.895136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.440 [2024-10-08 09:27:22.895142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.440 [2024-10-08 09:27:22.895264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 285.023 ms, result 0 00:21:31.440 true 00:21:31.440 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76349 00:21:31.440 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76349 00:21:31.440 09:27:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:21:31.440 [2024-10-08 
09:27:22.985993] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:21:31.440 [2024-10-08 09:27:22.986110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77012 ] 00:21:31.699 [2024-10-08 09:27:23.134288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.699 [2024-10-08 09:27:23.305464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.074  [2024-10-08T09:27:25.690Z] Copying: 254/1024 [MB] (254 MBps) [2024-10-08T09:27:26.625Z] Copying: 511/1024 [MB] (256 MBps) [2024-10-08T09:27:27.560Z] Copying: 765/1024 [MB] (254 MBps) [2024-10-08T09:27:27.560Z] Copying: 1012/1024 [MB] (247 MBps) [2024-10-08T09:27:28.495Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:21:36.812 00:21:36.812 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76349 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:21:36.812 09:27:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:36.812 [2024-10-08 09:27:28.304589] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:21:36.812 [2024-10-08 09:27:28.304711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77071 ] 00:21:36.812 [2024-10-08 09:27:28.454095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.072 [2024-10-08 09:27:28.626745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.330 [2024-10-08 09:27:28.855253] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.330 [2024-10-08 09:27:28.855303] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.330 [2024-10-08 09:27:28.918521] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:37.330 [2024-10-08 09:27:28.919041] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:37.330 [2024-10-08 09:27:28.919646] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:37.897 [2024-10-08 09:27:29.386712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.386740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.897 [2024-10-08 09:27:29.386752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:37.897 [2024-10-08 09:27:29.386759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.386795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.386802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.897 [2024-10-08 09:27:29.386809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:37.897 [2024-10-08 09:27:29.386816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.386829] mngt/ftl_mngt_bdev.c: 
196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:37.897 [2024-10-08 09:27:29.387374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.897 [2024-10-08 09:27:29.387410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.387419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.897 [2024-10-08 09:27:29.387426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:21:37.897 [2024-10-08 09:27:29.387432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.388662] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.897 [2024-10-08 09:27:29.399326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.399351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.897 [2024-10-08 09:27:29.399360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.665 ms 00:21:37.897 [2024-10-08 09:27:29.399367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.399434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.399443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.897 [2024-10-08 09:27:29.399453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:37.897 [2024-10-08 09:27:29.399459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.405657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.405677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.897 [2024-10-08 09:27:29.405685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:21:37.897 [2024-10-08 09:27:29.405691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.405751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.405758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.897 [2024-10-08 09:27:29.405764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:37.897 [2024-10-08 09:27:29.405770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.405801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.405808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.897 [2024-10-08 09:27:29.405815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.897 [2024-10-08 09:27:29.405821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.405838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.897 [2024-10-08 09:27:29.408888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.408907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.897 [2024-10-08 09:27:29.408915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.055 ms 00:21:37.897 [2024-10-08 09:27:29.408921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.408953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.408960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.897 [2024-10-08 09:27:29.408967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:37.897 [2024-10-08 09:27:29.408973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.408987] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.897 [2024-10-08 09:27:29.409003] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.897 [2024-10-08 09:27:29.409032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.897 [2024-10-08 09:27:29.409046] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.897 [2024-10-08 09:27:29.409129] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.897 [2024-10-08 09:27:29.409139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.897 [2024-10-08 09:27:29.409147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.897 [2024-10-08 09:27:29.409155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.897 [2024-10-08 09:27:29.409162] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.897 [2024-10-08 09:27:29.409168] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:37.897 [2024-10-08 09:27:29.409175] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.897 [2024-10-08 09:27:29.409181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.897 [2024-10-08 09:27:29.409186] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.897 [2024-10-08 09:27:29.409193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.409200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.897 [2024-10-08 09:27:29.409206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:21:37.897 [2024-10-08 09:27:29.409212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.409275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.897 [2024-10-08 09:27:29.409282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.897 [2024-10-08 09:27:29.409288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:37.897 [2024-10-08 09:27:29.409293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.897 [2024-10-08 09:27:29.409369] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.897 [2024-10-08 09:27:29.409377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.897 [2024-10-08 09:27:29.409386] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.897 [2024-10-08 09:27:29.409402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.897 [2024-10-08 09:27:29.409408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.897 [2024-10-08 09:27:29.409414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.897 [2024-10-08 09:27:29.409419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:37.897 [2024-10-08 09:27:29.409426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.897 [2024-10-08 09:27:29.409431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:37.897 [2024-10-08 09:27:29.409441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.897 [2024-10-08 09:27:29.409447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.898 [2024-10-08 09:27:29.409453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:37.898 [2024-10-08 09:27:29.409459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.898 [2024-10-08 09:27:29.409464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.898 [2024-10-08 09:27:29.409469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:37.898 [2024-10-08 09:27:29.409474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.898 [2024-10-08 09:27:29.409484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.898 [2024-10-08 09:27:29.409501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.898 [2024-10-08 09:27:29.409515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.898 [2024-10-08 09:27:29.409531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.898 [2024-10-08 09:27:29.409545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.898 [2024-10-08 09:27:29.409560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.898 [2024-10-08 
09:27:29.409572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:37.898 [2024-10-08 09:27:29.409576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:37.898 [2024-10-08 09:27:29.409581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.898 [2024-10-08 09:27:29.409586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.898 [2024-10-08 09:27:29.409592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:37.898 [2024-10-08 09:27:29.409597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.898 [2024-10-08 09:27:29.409607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:37.898 [2024-10-08 09:27:29.409612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.898 [2024-10-08 09:27:29.409625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.898 [2024-10-08 09:27:29.409631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.898 [2024-10-08 09:27:29.409642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.898 [2024-10-08 09:27:29.409648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.898 [2024-10-08 09:27:29.409652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.898 [2024-10-08 09:27:29.409658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.898 [2024-10-08 09:27:29.409662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.898 [2024-10-08 09:27:29.409667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.898 [2024-10-08 09:27:29.409674] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.898 [2024-10-08 09:27:29.409681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:37.898 [2024-10-08 09:27:29.409694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:37.898 [2024-10-08 09:27:29.409700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:37.898 [2024-10-08 09:27:29.409705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:37.898 [2024-10-08 09:27:29.409710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:37.898 [2024-10-08 09:27:29.409716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:37.898 [2024-10-08 09:27:29.409721] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:37.898 [2024-10-08 09:27:29.409727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:37.898 [2024-10-08 09:27:29.409732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:37.898 [2024-10-08 09:27:29.409738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:37.898 [2024-10-08 09:27:29.409766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.898 [2024-10-08 09:27:29.409773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.898 [2024-10-08 09:27:29.409787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.898 [2024-10-08 09:27:29.409793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.898 [2024-10-08 09:27:29.409798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.898 [2024-10-08 09:27:29.409805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.409812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.898 [2024-10-08 09:27:29.409817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:21:37.898 [2024-10-08 09:27:29.409823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.455861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.455891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.898 [2024-10-08 09:27:29.455901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.989 ms 00:21:37.898 [2024-10-08 09:27:29.455908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.455984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.455991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.898 [2024-10-08 09:27:29.455998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:37.898 [2024-10-08 09:27:29.456004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.482482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.482506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.898 [2024-10-08 09:27:29.482514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.432 ms 00:21:37.898 [2024-10-08 09:27:29.482521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.482548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.482555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.898 [2024-10-08 09:27:29.482562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:37.898 [2024-10-08 09:27:29.482568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.482983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.483048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.898 [2024-10-08 09:27:29.483056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:21:37.898 [2024-10-08 09:27:29.483063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.483180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.483192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.898 [2024-10-08 09:27:29.483199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:37.898 [2024-10-08 09:27:29.483205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.494285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.494305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.898 [2024-10-08 09:27:29.494313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:21:37.898 [2024-10-08 09:27:29.494319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.505039] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:37.898 [2024-10-08 09:27:29.505063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.898 [2024-10-08 09:27:29.505073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.505080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.898 [2024-10-08 09:27:29.505087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.667 ms 00:21:37.898 [2024-10-08 09:27:29.505094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.523719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.898 [2024-10-08 09:27:29.523748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.898 [2024-10-08 09:27:29.523759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.595 ms 
00:21:37.898 [2024-10-08 09:27:29.523767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.898 [2024-10-08 09:27:29.533157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.899 [2024-10-08 09:27:29.533179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.899 [2024-10-08 09:27:29.533186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.359 ms 00:21:37.899 [2024-10-08 09:27:29.533192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.899 [2024-10-08 09:27:29.541899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.899 [2024-10-08 09:27:29.541920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.899 [2024-10-08 09:27:29.541927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.681 ms 00:21:37.899 [2024-10-08 09:27:29.541933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.899 [2024-10-08 09:27:29.542412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.899 [2024-10-08 09:27:29.542429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.899 [2024-10-08 09:27:29.542436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:21:37.899 [2024-10-08 09:27:29.542442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.157 [2024-10-08 09:27:29.590855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.157 [2024-10-08 09:27:29.590888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.157 [2024-10-08 09:27:29.590898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.399 ms 00:21:38.157 [2024-10-08 09:27:29.590904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.157 [2024-10-08 09:27:29.599201] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:38.157 [2024-10-08 09:27:29.601323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.157 [2024-10-08 09:27:29.601344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.157 [2024-10-08 09:27:29.601353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.382 ms 00:21:38.157 [2024-10-08 09:27:29.601360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.157 [2024-10-08 09:27:29.601436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.157 [2024-10-08 09:27:29.601444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.157 [2024-10-08 09:27:29.601451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:38.157 [2024-10-08 09:27:29.601458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.157 [2024-10-08 09:27:29.601518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.157 [2024-10-08 09:27:29.601530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.157 [2024-10-08 09:27:29.601537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:38.157 [2024-10-08 09:27:29.601544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.157 [2024-10-08 09:27:29.601561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.157 [2024-10-08 
09:27:29.601568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:38.157 [2024-10-08 09:27:29.601574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.157 [2024-10-08 09:27:29.601580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.158 [2024-10-08 09:27:29.601608] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.158 [2024-10-08 09:27:29.601619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.158 [2024-10-08 09:27:29.601625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.158 [2024-10-08 09:27:29.601633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:38.158 [2024-10-08 09:27:29.601640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.158 [2024-10-08 09:27:29.620136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.158 [2024-10-08 09:27:29.620161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.158 [2024-10-08 09:27:29.620170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.482 ms 00:21:38.158 [2024-10-08 09:27:29.620177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.158 [2024-10-08 09:27:29.620237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.158 [2024-10-08 09:27:29.620245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.158 [2024-10-08 09:27:29.620252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:38.158 [2024-10-08 09:27:29.620259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.158 [2024-10-08 09:27:29.621115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.028 ms, result 0 00:21:39.092  [2024-10-08T09:27:31.708Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-08T09:27:32.641Z] Copying: 37/1024 [MB] (24 MBps) [2024-10-08T09:27:34.016Z] Copying: 52/1024 [MB] (15 MBps) [2024-10-08T09:27:34.950Z] Copying: 64/1024 [MB] (12 MBps) [2024-10-08T09:27:35.921Z] Copying: 77/1024 [MB] (12 MBps) [2024-10-08T09:27:36.854Z] Copying: 90/1024 [MB] (12 MBps) [2024-10-08T09:27:37.789Z] Copying: 104/1024 [MB] (14 MBps) [2024-10-08T09:27:38.722Z] Copying: 116/1024 [MB] (12 MBps) [2024-10-08T09:27:39.655Z] Copying: 129/1024 [MB] (12 MBps) [2024-10-08T09:27:41.039Z] Copying: 141/1024 [MB] (12 MBps) [2024-10-08T09:27:41.972Z] Copying: 152/1024 [MB] (10 MBps) [2024-10-08T09:27:42.906Z] Copying: 165/1024 [MB] (12 MBps) [2024-10-08T09:27:43.839Z] Copying: 183/1024 [MB] (18 MBps) [2024-10-08T09:27:44.771Z] Copying: 196/1024 [MB] (12 MBps) [2024-10-08T09:27:45.706Z] Copying: 209/1024 [MB] (12 MBps) [2024-10-08T09:27:46.650Z] Copying: 221/1024 [MB] (12 MBps) [2024-10-08T09:27:48.027Z] Copying: 231/1024 [MB] (10 MBps) [2024-10-08T09:27:48.967Z] Copying: 243/1024 [MB] (12 MBps) [2024-10-08T09:27:49.900Z] Copying: 254/1024 [MB] (10 MBps) [2024-10-08T09:27:50.833Z] Copying: 266/1024 [MB] (11 MBps) [2024-10-08T09:27:51.768Z] Copying: 278/1024 [MB] (12 MBps) [2024-10-08T09:27:52.702Z] Copying: 290/1024 [MB] (12 MBps) [2024-10-08T09:27:53.691Z] Copying: 302/1024 [MB] (11 MBps) [2024-10-08T09:27:54.632Z] Copying: 314/1024 [MB] (12 MBps) [2024-10-08T09:27:56.014Z] Copying: 328/1024 [MB] (14 MBps) [2024-10-08T09:27:56.948Z] Copying: 339/1024 [MB] (10 MBps) 
[2024-10-08T09:27:57.888Z] Copying: 351/1024 [MB] (12 MBps) [2024-10-08T09:27:58.821Z] Copying: 362/1024 [MB] (11 MBps) [2024-10-08T09:27:59.755Z] Copying: 374/1024 [MB] (11 MBps) [2024-10-08T09:28:00.688Z] Copying: 387/1024 [MB] (12 MBps) [2024-10-08T09:28:02.067Z] Copying: 398/1024 [MB] (11 MBps) [2024-10-08T09:28:02.633Z] Copying: 410/1024 [MB] (11 MBps) [2024-10-08T09:28:04.008Z] Copying: 420/1024 [MB] (10 MBps) [2024-10-08T09:28:04.947Z] Copying: 432/1024 [MB] (11 MBps) [2024-10-08T09:28:05.884Z] Copying: 443/1024 [MB] (10 MBps) [2024-10-08T09:28:06.820Z] Copying: 455/1024 [MB] (11 MBps) [2024-10-08T09:28:07.760Z] Copying: 467/1024 [MB] (12 MBps) [2024-10-08T09:28:08.694Z] Copying: 478/1024 [MB] (10 MBps) [2024-10-08T09:28:09.633Z] Copying: 490/1024 [MB] (12 MBps) [2024-10-08T09:28:10.643Z] Copying: 502/1024 [MB] (11 MBps) [2024-10-08T09:28:12.017Z] Copying: 513/1024 [MB] (11 MBps) [2024-10-08T09:28:12.953Z] Copying: 526/1024 [MB] (12 MBps) [2024-10-08T09:28:13.896Z] Copying: 539/1024 [MB] (13 MBps) [2024-10-08T09:28:14.833Z] Copying: 550/1024 [MB] (11 MBps) [2024-10-08T09:28:15.767Z] Copying: 560/1024 [MB] (10 MBps) [2024-10-08T09:28:16.703Z] Copying: 574/1024 [MB] (13 MBps) [2024-10-08T09:28:17.646Z] Copying: 587/1024 [MB] (13 MBps) [2024-10-08T09:28:19.021Z] Copying: 597/1024 [MB] (10 MBps) [2024-10-08T09:28:19.955Z] Copying: 609/1024 [MB] (11 MBps) [2024-10-08T09:28:20.891Z] Copying: 622/1024 [MB] (12 MBps) [2024-10-08T09:28:21.836Z] Copying: 635/1024 [MB] (12 MBps) [2024-10-08T09:28:22.770Z] Copying: 645/1024 [MB] (10 MBps) [2024-10-08T09:28:23.702Z] Copying: 658/1024 [MB] (12 MBps) [2024-10-08T09:28:24.636Z] Copying: 671/1024 [MB] (13 MBps) [2024-10-08T09:28:26.020Z] Copying: 685/1024 [MB] (13 MBps) [2024-10-08T09:28:26.954Z] Copying: 696/1024 [MB] (10 MBps) [2024-10-08T09:28:27.941Z] Copying: 708/1024 [MB] (11 MBps) [2024-10-08T09:28:28.887Z] Copying: 721/1024 [MB] (13 MBps) [2024-10-08T09:28:29.827Z] Copying: 732/1024 [MB] (11 MBps) [2024-10-08T09:28:30.770Z] Copying: 744/1024 [MB] (11 MBps) [2024-10-08T09:28:31.713Z] Copying: 754/1024 [MB] (10 MBps) [2024-10-08T09:28:32.657Z] Copying: 782904/1048576 [kB] (10092 kBps) [2024-10-08T09:28:34.037Z] Copying: 774/1024 [MB] (10 MBps) [2024-10-08T09:28:34.979Z] Copying: 786/1024 [MB] (11 MBps) [2024-10-08T09:28:35.913Z] Copying: 796/1024 [MB] (10 MBps) [2024-10-08T09:28:36.846Z] Copying: 807/1024 [MB] (11 MBps) [2024-10-08T09:28:37.780Z] Copying: 819/1024 [MB] (11 MBps) [2024-10-08T09:28:38.712Z] Copying: 831/1024 [MB] (11 MBps) [2024-10-08T09:28:39.646Z] Copying: 843/1024 [MB] (12 MBps) [2024-10-08T09:28:41.019Z] Copying: 855/1024 [MB] (11 MBps) [2024-10-08T09:28:41.958Z] Copying: 866/1024 [MB] (11 MBps) [2024-10-08T09:28:42.892Z] Copying: 877/1024 [MB] (11 MBps) [2024-10-08T09:28:43.824Z] Copying: 889/1024 [MB] (11 MBps) [2024-10-08T09:28:44.811Z] Copying: 900/1024 [MB] (11 MBps) [2024-10-08T09:28:45.761Z] Copying: 912/1024 [MB] (11 MBps) [2024-10-08T09:28:46.700Z] Copying: 923/1024 [MB] (10 MBps) [2024-10-08T09:28:47.637Z] Copying: 934/1024 [MB] (10 MBps) [2024-10-08T09:28:49.015Z] Copying: 944/1024 [MB] (10 MBps) [2024-10-08T09:28:49.949Z] Copying: 955/1024 [MB] (11 MBps) [2024-10-08T09:28:50.884Z] Copying: 967/1024 [MB] (11 MBps) [2024-10-08T09:28:51.828Z] Copying: 983/1024 [MB] (16 MBps) [2024-10-08T09:28:52.765Z] Copying: 993/1024 [MB] (10 MBps) [2024-10-08T09:28:53.699Z] Copying: 1004/1024 [MB] (10 MBps) [2024-10-08T09:28:54.638Z] Copying: 1015/1024 [MB] (11 MBps) [2024-10-08T09:28:55.208Z] Copying: 1047996/1048576 [kB] (7804 
kBps) [2024-10-08T09:28:55.208Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-10-08 09:28:55.055327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.055665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:03.525 [2024-10-08 09:28:55.055690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:03.525 [2024-10-08 09:28:55.055700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.059350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:03.525 [2024-10-08 09:28:55.063012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.063141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:03.525 [2024-10-08 09:28:55.063159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms 00:23:03.525 [2024-10-08 09:28:55.063167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.074947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.074989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:03.525 [2024-10-08 09:28:55.075003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.973 ms 00:23:03.525 [2024-10-08 09:28:55.075011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.098083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.098236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:03.525 [2024-10-08 09:28:55.098254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.057 ms 00:23:03.525 [2024-10-08 09:28:55.098263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.104415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.104524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:03.525 [2024-10-08 09:28:55.104539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:23:03.525 [2024-10-08 09:28:55.104547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.129833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.129865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:03.525 [2024-10-08 09:28:55.129876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.230 ms 00:23:03.525 [2024-10-08 09:28:55.129884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.525 [2024-10-08 09:28:55.144228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.525 [2024-10-08 09:28:55.144356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:03.525 [2024-10-08 09:28:55.144372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.311 ms 00:23:03.525 [2024-10-08 09:28:55.144386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.787 [2024-10-08 09:28:55.383582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.787 [2024-10-08 09:28:55.383770] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:03.787 [2024-10-08 09:28:55.383851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 238.941 ms 00:23:03.787 [2024-10-08 09:28:55.383879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.787 [2024-10-08 09:28:55.410654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.787 [2024-10-08 09:28:55.410843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:03.787 [2024-10-08 09:28:55.410975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.739 ms 00:23:03.787 [2024-10-08 09:28:55.411002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.787 [2024-10-08 09:28:55.436913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.787 [2024-10-08 09:28:55.437102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:03.787 [2024-10-08 09:28:55.437175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms 00:23:03.787 [2024-10-08 09:28:55.437200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.787 [2024-10-08 09:28:55.462463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.787 [2024-10-08 09:28:55.462637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:03.787 [2024-10-08 09:28:55.462711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.215 ms 00:23:03.787 [2024-10-08 09:28:55.462735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.048 [2024-10-08 09:28:55.487909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.048 [2024-10-08 09:28:55.488081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:04.048 [2024-10-08 09:28:55.488152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.084 ms 00:23:04.048 [2024-10-08 09:28:55.488175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.048 [2024-10-08 09:28:55.488221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:04.048 [2024-10-08 09:28:55.488252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 90624 / 261120 wr_cnt: 1 state: open 00:23:04.048 [2024-10-08 09:28:55.488289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488641] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.488975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 
09:28:55.489856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.489982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 
00:23:04.048 [2024-10-08 09:28:55.490782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.490995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:04.048 [2024-10-08 09:28:55.491766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.491979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:04.049 [2024-10-08 09:28:55.492330] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:04.049 [2024-10-08 09:28:55.492340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 54e93b64-5b47-437d-a677-89097ad5eeb3 00:23:04.049 [2024-10-08 09:28:55.492349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 90624 00:23:04.049 [2024-10-08 09:28:55.492358] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 91584 00:23:04.049 [2024-10-08 09:28:55.492366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 90624 00:23:04.049 [2024-10-08 09:28:55.492375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0106 00:23:04.049 [2024-10-08 09:28:55.492383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:04.049 [2024-10-08 09:28:55.492482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:04.049 [2024-10-08 09:28:55.492518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:04.049 [2024-10-08 09:28:55.492540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:04.049 [2024-10-08 09:28:55.492558] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:04.049 [2024-10-08 09:28:55.492578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.049 [2024-10-08 09:28:55.492602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:04.049 [2024-10-08 09:28:55.492732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.358 ms 00:23:04.049 [2024-10-08 09:28:55.492742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.507200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.049 [2024-10-08 09:28:55.507245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:04.049 [2024-10-08 09:28:55.507257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.415 ms 00:23:04.049 [2024-10-08 09:28:55.507266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.507762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.049 [2024-10-08 09:28:55.507785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:04.049 [2024-10-08 09:28:55.507795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:23:04.049 [2024-10-08 09:28:55.507803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.542090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.542144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.049 [2024-10-08 09:28:55.542155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.542171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.542239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.542249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.049 [2024-10-08 09:28:55.542258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.542266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.542355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.542369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.049 [2024-10-08 09:28:55.542379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.542420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.542442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.542452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.049 [2024-10-08 09:28:55.542460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.542468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.633682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.633971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.049 [2024-10-08 09:28:55.633997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.634008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.049 [2024-10-08 09:28:55.706236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.049 [2024-10-08 09:28:55.706336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.049 [2024-10-08 09:28:55.706478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.049 [2024-10-08 09:28:55.706607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:04.049 [2024-10-08 09:28:55.706664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.049 [2024-10-08 09:28:55.706740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.049 [2024-10-08 09:28:55.706819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.049 [2024-10-08 09:28:55.706826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.049 [2024-10-08 09:28:55.706835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.049 [2024-10-08 09:28:55.706978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.121 ms, result 0 00:23:05.428 00:23:05.428 00:23:05.428 09:28:56 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:07.980 09:28:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:07.980 [2024-10-08 09:28:59.166732] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:23:07.980 [2024-10-08 09:28:59.166823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78007 ] 00:23:07.980 [2024-10-08 09:28:59.311852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.980 [2024-10-08 09:28:59.549651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.241 [2024-10-08 09:28:59.884100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:08.241 [2024-10-08 09:28:59.884195] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:08.503 [2024-10-08 09:29:00.050345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.050437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:08.503 [2024-10-08 09:29:00.050456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:08.503 [2024-10-08 09:29:00.050466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.050533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.050569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:08.503 [2024-10-08 09:29:00.050579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:08.503 [2024-10-08 09:29:00.050587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.050612] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:08.503 [2024-10-08 09:29:00.051637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:08.503 [2024-10-08 09:29:00.051691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.051701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:08.503 [2024-10-08 09:29:00.051713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:23:08.503 [2024-10-08 09:29:00.051721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.054105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:08.503 [2024-10-08 09:29:00.069955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.070034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:08.503 [2024-10-08 09:29:00.070049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.852 ms 00:23:08.503 [2024-10-08 09:29:00.070058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.070145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.070157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:08.503 [2024-10-08 09:29:00.070168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:08.503 [2024-10-08 09:29:00.070176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.082004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.082055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:08.503 [2024-10-08 09:29:00.082067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.745 ms 00:23:08.503 [2024-10-08 09:29:00.082076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.082164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.082175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:08.503 [2024-10-08 09:29:00.082185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:08.503 [2024-10-08 09:29:00.082194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.082263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.082275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:08.503 [2024-10-08 09:29:00.082284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:08.503 [2024-10-08 09:29:00.082293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.082320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:08.503 [2024-10-08 09:29:00.087016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.087060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:08.503 [2024-10-08 09:29:00.087072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:23:08.503 [2024-10-08 09:29:00.087081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.503 [2024-10-08 09:29:00.087122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.503 [2024-10-08 09:29:00.087131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:08.503 [2024-10-08 09:29:00.087140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:08.504 [2024-10-08 09:29:00.087150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.504 [2024-10-08 09:29:00.087195] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:08.504 [2024-10-08 09:29:00.087223] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:08.504 [2024-10-08 09:29:00.087264] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:08.504 [2024-10-08 09:29:00.087283] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:08.504 [2024-10-08 09:29:00.087450] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:08.504 [2024-10-08 09:29:00.087465] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:08.504 [2024-10-08 09:29:00.087477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:08.504 [2024-10-08 09:29:00.087493] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087513] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:08.504 [2024-10-08 09:29:00.087522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:08.504 [2024-10-08 09:29:00.087530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:08.504 [2024-10-08 09:29:00.087540] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:08.504 [2024-10-08 09:29:00.087551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.504 [2024-10-08 09:29:00.087560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:08.504 [2024-10-08 09:29:00.087569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:23:08.504 [2024-10-08 09:29:00.087577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.504 [2024-10-08 09:29:00.087665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.504 [2024-10-08 09:29:00.087679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:08.504 [2024-10-08 09:29:00.087689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:08.504 [2024-10-08 09:29:00.087697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.504 [2024-10-08 09:29:00.087806] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:08.504 [2024-10-08 09:29:00.087820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:08.504 [2024-10-08 09:29:00.087829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:08.504 [2024-10-08 09:29:00.087855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:08.504 [2024-10-08 09:29:00.087880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.504 [2024-10-08 09:29:00.087894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:08.504 [2024-10-08 09:29:00.087902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:08.504 [2024-10-08 09:29:00.087911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:08.504 [2024-10-08 09:29:00.087928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:08.504 [2024-10-08 09:29:00.087936] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:08.504 [2024-10-08 09:29:00.087943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:08.504 [2024-10-08 09:29:00.087958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:08.504 [2024-10-08 09:29:00.087980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:08.504 [2024-10-08 09:29:00.087988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.504 [2024-10-08 09:29:00.087996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:08.504 [2024-10-08 09:29:00.088004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.504 [2024-10-08 09:29:00.088020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:08.504 [2024-10-08 09:29:00.088027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.504 [2024-10-08 09:29:00.088042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:08.504 [2024-10-08 09:29:00.088049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:08.504 [2024-10-08 09:29:00.088069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:08.504 [2024-10-08 09:29:00.088075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.504 [2024-10-08 09:29:00.088089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:08.504 [2024-10-08 09:29:00.088097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:08.504 [2024-10-08 09:29:00.088103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:08.504 [2024-10-08 09:29:00.088110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:08.504 [2024-10-08 09:29:00.088117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:08.504 [2024-10-08 09:29:00.088124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:08.504 [2024-10-08 09:29:00.088139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:08.504 [2024-10-08 09:29:00.088147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088154] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:08.504 [2024-10-08 09:29:00.088166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:08.504 [2024-10-08 09:29:00.088179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:23:08.504 [2024-10-08 09:29:00.088190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:08.504 [2024-10-08 09:29:00.088202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:08.504 [2024-10-08 09:29:00.088210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:08.504 [2024-10-08 09:29:00.088218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:08.504 [2024-10-08 09:29:00.088226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:08.504 [2024-10-08 09:29:00.088233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:08.504 [2024-10-08 09:29:00.088240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:08.504 [2024-10-08 09:29:00.088250] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:08.504 [2024-10-08 09:29:00.088261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:08.504 [2024-10-08 09:29:00.088281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:08.504 [2024-10-08 09:29:00.088289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:08.504 [2024-10-08 09:29:00.088297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:08.504 [2024-10-08 09:29:00.088305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:08.504 [2024-10-08 09:29:00.088313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:08.504 [2024-10-08 09:29:00.088321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:08.504 [2024-10-08 09:29:00.088330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:08.504 [2024-10-08 09:29:00.088337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:08.504 [2024-10-08 09:29:00.088346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:23:08.504 [2024-10-08 09:29:00.088386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:08.504 [2024-10-08 09:29:00.088426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:08.504 [2024-10-08 09:29:00.088444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:08.504 [2024-10-08 09:29:00.088452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:08.504 [2024-10-08 09:29:00.088461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:08.504 [2024-10-08 09:29:00.088471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.504 [2024-10-08 09:29:00.088481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:08.504 [2024-10-08 09:29:00.088491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:23:08.504 [2024-10-08 09:29:00.088500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.504 [2024-10-08 09:29:00.136520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.136580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.505 [2024-10-08 09:29:00.136595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.963 ms 00:23:08.505 [2024-10-08 09:29:00.136604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.505 [2024-10-08 09:29:00.136719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.136730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:08.505 [2024-10-08 09:29:00.136743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:08.505 [2024-10-08 09:29:00.136752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.505 [2024-10-08 09:29:00.176614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.176664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.505 [2024-10-08 09:29:00.176682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.789 ms 00:23:08.505 [2024-10-08 09:29:00.176691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.505 [2024-10-08 09:29:00.176737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.176746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.505 [2024-10-08 09:29:00.176757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:08.505 [2024-10-08 09:29:00.176766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.505 [2024-10-08 09:29:00.177582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.177631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.505 [2024-10-08 
09:29:00.177644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:23:08.505 [2024-10-08 09:29:00.177665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.505 [2024-10-08 09:29:00.177858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.505 [2024-10-08 09:29:00.177872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.505 [2024-10-08 09:29:00.177883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:23:08.505 [2024-10-08 09:29:00.177893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.194944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.194989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.766 [2024-10-08 09:29:00.195001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.026 ms 00:23:08.766 [2024-10-08 09:29:00.195010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.210545] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:08.766 [2024-10-08 09:29:00.210826] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:08.766 [2024-10-08 09:29:00.210849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.210860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:08.766 [2024-10-08 09:29:00.210870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.716 ms 00:23:08.766 [2024-10-08 09:29:00.210879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.237498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.237707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:08.766 [2024-10-08 09:29:00.237732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.566 ms 00:23:08.766 [2024-10-08 09:29:00.237742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.251109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.251163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:08.766 [2024-10-08 09:29:00.251176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.314 ms 00:23:08.766 [2024-10-08 09:29:00.251184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.264428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.264628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:08.766 [2024-10-08 09:29:00.264652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.185 ms 00:23:08.766 [2024-10-08 09:29:00.264661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.766 [2024-10-08 09:29:00.265335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.766 [2024-10-08 09:29:00.265365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:08.767 [2024-10-08 09:29:00.265378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.559 ms 00:23:08.767 [2024-10-08 09:29:00.265410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.339981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.340204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:08.767 [2024-10-08 09:29:00.340230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.548 ms 00:23:08.767 [2024-10-08 09:29:00.340241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.352483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:08.767 [2024-10-08 09:29:00.356619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.356665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:08.767 [2024-10-08 09:29:00.356678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.194 ms 00:23:08.767 [2024-10-08 09:29:00.356696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.356784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.356796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:08.767 [2024-10-08 09:29:00.356808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:08.767 [2024-10-08 09:29:00.356817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.358950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.359153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:08.767 [2024-10-08 09:29:00.359176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:23:08.767 [2024-10-08 09:29:00.359187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.359232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.359243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:08.767 [2024-10-08 09:29:00.359253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:08.767 [2024-10-08 09:29:00.359262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.359310] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:08.767 [2024-10-08 09:29:00.359322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.359334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:08.767 [2024-10-08 09:29:00.359345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:08.767 [2024-10-08 09:29:00.359360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.385969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.386188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:08.767 [2024-10-08 09:29:00.386213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.587 ms 00:23:08.767 [2024-10-08 09:29:00.386223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:08.767 [2024-10-08 09:29:00.386987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.767 [2024-10-08 09:29:00.387039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:08.767 [2024-10-08 09:29:00.387054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:08.767 [2024-10-08 09:29:00.387064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.767 [2024-10-08 09:29:00.388721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 337.758 ms, result 0 00:23:10.150  [2024-10-08T09:29:02.815Z] Copying: 1204/1048576 [kB] (1204 kBps) [2024-10-08T09:29:03.758Z] Copying: 4616/1048576 [kB] (3412 kBps) [2024-10-08T09:29:04.700Z] Copying: 20/1024 [MB] (15 MBps) [2024-10-08T09:29:05.639Z] Copying: 35/1024 [MB] (15 MBps) [2024-10-08T09:29:06.580Z] Copying: 53/1024 [MB] (17 MBps) [2024-10-08T09:29:07.963Z] Copying: 70/1024 [MB] (17 MBps) [2024-10-08T09:29:08.905Z] Copying: 86/1024 [MB] (16 MBps) [2024-10-08T09:29:09.846Z] Copying: 109/1024 [MB] (22 MBps) [2024-10-08T09:29:10.789Z] Copying: 124/1024 [MB] (15 MBps) [2024-10-08T09:29:11.732Z] Copying: 144/1024 [MB] (19 MBps) [2024-10-08T09:29:12.674Z] Copying: 160/1024 [MB] (15 MBps) [2024-10-08T09:29:13.616Z] Copying: 177/1024 [MB] (16 MBps) [2024-10-08T09:29:15.000Z] Copying: 202/1024 [MB] (25 MBps) [2024-10-08T09:29:15.942Z] Copying: 231/1024 [MB] (29 MBps) [2024-10-08T09:29:16.887Z] Copying: 254/1024 [MB] (22 MBps) [2024-10-08T09:29:17.831Z] Copying: 278/1024 [MB] (23 MBps) [2024-10-08T09:29:18.774Z] Copying: 303/1024 [MB] (25 MBps) [2024-10-08T09:29:19.786Z] Copying: 328/1024 [MB] (24 MBps) [2024-10-08T09:29:20.725Z] Copying: 357/1024 [MB] (29 MBps) [2024-10-08T09:29:21.668Z] Copying: 383/1024 [MB] (25 MBps) [2024-10-08T09:29:22.608Z] Copying: 404/1024 [MB] (21 MBps) [2024-10-08T09:29:23.993Z] Copying: 427/1024 [MB] (22 MBps) [2024-10-08T09:29:24.935Z] Copying: 454/1024 [MB] (27 MBps) [2024-10-08T09:29:25.876Z] Copying: 470/1024 [MB] (15 MBps) [2024-10-08T09:29:26.816Z] Copying: 487/1024 [MB] (16 MBps) [2024-10-08T09:29:27.754Z] Copying: 503/1024 [MB] (16 MBps) [2024-10-08T09:29:28.693Z] Copying: 519/1024 [MB] (16 MBps) [2024-10-08T09:29:29.632Z] Copying: 535/1024 [MB] (15 MBps) [2024-10-08T09:29:30.575Z] Copying: 551/1024 [MB] (16 MBps) [2024-10-08T09:29:31.959Z] Copying: 567/1024 [MB] (16 MBps) [2024-10-08T09:29:32.903Z] Copying: 583/1024 [MB] (16 MBps) [2024-10-08T09:29:33.846Z] Copying: 600/1024 [MB] (16 MBps) [2024-10-08T09:29:34.788Z] Copying: 617/1024 [MB] (16 MBps) [2024-10-08T09:29:35.732Z] Copying: 633/1024 [MB] (16 MBps) [2024-10-08T09:29:36.715Z] Copying: 649/1024 [MB] (16 MBps) [2024-10-08T09:29:37.691Z] Copying: 666/1024 [MB] (16 MBps) [2024-10-08T09:29:38.633Z] Copying: 685/1024 [MB] (19 MBps) [2024-10-08T09:29:39.576Z] Copying: 701/1024 [MB] (16 MBps) [2024-10-08T09:29:40.964Z] Copying: 717/1024 [MB] (16 MBps) [2024-10-08T09:29:41.908Z] Copying: 734/1024 [MB] (16 MBps) [2024-10-08T09:29:42.851Z] Copying: 750/1024 [MB] (16 MBps) [2024-10-08T09:29:43.795Z] Copying: 767/1024 [MB] (16 MBps) [2024-10-08T09:29:44.738Z] Copying: 796/1024 [MB] (28 MBps) [2024-10-08T09:29:45.682Z] Copying: 821/1024 [MB] (25 MBps) [2024-10-08T09:29:46.625Z] Copying: 846/1024 [MB] (25 MBps) [2024-10-08T09:29:48.013Z] Copying: 862/1024 [MB] (15 MBps) [2024-10-08T09:29:48.583Z] Copying: 892/1024 [MB] (30 MBps) [2024-10-08T09:29:49.970Z] Copying: 910/1024 [MB] (17 MBps) 
[2024-10-08T09:29:50.916Z] Copying: 944/1024 [MB] (34 MBps) [2024-10-08T09:29:51.860Z] Copying: 967/1024 [MB] (22 MBps) [2024-10-08T09:29:52.803Z] Copying: 995/1024 [MB] (27 MBps) [2024-10-08T09:29:53.065Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-10-08 09:29:52.980831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:52.980920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:01.382 [2024-10-08 09:29:52.980937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.382 [2024-10-08 09:29:52.980947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:52.980973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:01.382 [2024-10-08 09:29:52.984587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:52.984747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:01.382 [2024-10-08 09:29:52.984820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:24:01.382 [2024-10-08 09:29:52.984844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:52.985105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:52.985135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:01.382 [2024-10-08 09:29:52.985157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:01.382 [2024-10-08 09:29:52.985221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:53.000941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:53.001137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:01.382 [2024-10-08 09:29:53.001237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.685 ms 00:24:01.382 [2024-10-08 09:29:53.001273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:53.007802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:53.007979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:01.382 [2024-10-08 09:29:53.008043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.456 ms 00:24:01.382 [2024-10-08 09:29:53.008068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:53.036309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:53.036498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:01.382 [2024-10-08 09:29:53.036701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.159 ms 00:24:01.382 [2024-10-08 09:29:53.036743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 [2024-10-08 09:29:53.053938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:53.054129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:01.382 [2024-10-08 09:29:53.054152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.060 ms 00:24:01.382 [2024-10-08 09:29:53.054162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.382 
[2024-10-08 09:29:53.058920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.382 [2024-10-08 09:29:53.058965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:01.382 [2024-10-08 09:29:53.058977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:24:01.382 [2024-10-08 09:29:53.058987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.644 [2024-10-08 09:29:53.084904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.644 [2024-10-08 09:29:53.085059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:01.644 [2024-10-08 09:29:53.085079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.902 ms 00:24:01.644 [2024-10-08 09:29:53.085086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.644 [2024-10-08 09:29:53.110310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.644 [2024-10-08 09:29:53.110367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:01.644 [2024-10-08 09:29:53.110380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.189 ms 00:24:01.644 [2024-10-08 09:29:53.110404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.644 [2024-10-08 09:29:53.135025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.644 [2024-10-08 09:29:53.135065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:01.644 [2024-10-08 09:29:53.135077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:24:01.644 [2024-10-08 09:29:53.135084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.644 [2024-10-08 09:29:53.159308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.644 [2024-10-08 09:29:53.159503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:01.644 [2024-10-08 09:29:53.159524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.153 ms 00:24:01.644 [2024-10-08 09:29:53.159532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.644 [2024-10-08 09:29:53.159568] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:01.644 [2024-10-08 09:29:53.159583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:01.644 [2024-10-08 09:29:53.159601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:01.644 [2024-10-08 09:29:53.159610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159662] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 
[2024-10-08 09:29:53.159861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:01.644 [2024-10-08 09:29:53.159963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.159970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.159978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.159986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.159994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 
state: free 00:24:01.645 [2024-10-08 09:29:53.160060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 
0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:01.645 [2024-10-08 09:29:53.160417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:01.645 [2024-10-08 09:29:53.160426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 54e93b64-5b47-437d-a677-89097ad5eeb3 00:24:01.645 [2024-10-08 09:29:53.160434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:01.645 [2024-10-08 09:29:53.160442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 174016 00:24:01.645 [2024-10-08 09:29:53.160450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 172032 00:24:01.645 [2024-10-08 09:29:53.160459] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0115 00:24:01.645 [2024-10-08 09:29:53.160466] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:01.645 [2024-10-08 09:29:53.160474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:01.645 [2024-10-08 09:29:53.160482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:01.645 [2024-10-08 
09:29:53.160489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:01.645 [2024-10-08 09:29:53.160495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:01.645 [2024-10-08 09:29:53.160503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.645 [2024-10-08 09:29:53.160511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:01.645 [2024-10-08 09:29:53.160527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:24:01.645 [2024-10-08 09:29:53.160537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.174479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.645 [2024-10-08 09:29:53.174623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:01.645 [2024-10-08 09:29:53.174675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.923 ms 00:24:01.645 [2024-10-08 09:29:53.174699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.175118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.645 [2024-10-08 09:29:53.175172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:01.645 [2024-10-08 09:29:53.175243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:24:01.645 [2024-10-08 09:29:53.175264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.206895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.645 [2024-10-08 09:29:53.207050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.645 [2024-10-08 09:29:53.207108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.645 [2024-10-08 09:29:53.207131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.207200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.645 [2024-10-08 09:29:53.207229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.645 [2024-10-08 09:29:53.207249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.645 [2024-10-08 09:29:53.207268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.207360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.645 [2024-10-08 09:29:53.207581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.645 [2024-10-08 09:29:53.207609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.645 [2024-10-08 09:29:53.207628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.207659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.645 [2024-10-08 09:29:53.207680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.645 [2024-10-08 09:29:53.207705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.645 [2024-10-08 09:29:53.207725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.645 [2024-10-08 09:29:53.290850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.645 [2024-10-08 09:29:53.291050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:24:01.645 [2024-10-08 09:29:53.291108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.645 [2024-10-08 09:29:53.291132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.906 [2024-10-08 09:29:53.359458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.906 [2024-10-08 09:29:53.359722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.906 [2024-10-08 09:29:53.359937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.906 [2024-10-08 09:29:53.359981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.906 [2024-10-08 09:29:53.360052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.906 [2024-10-08 09:29:53.360074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.906 [2024-10-08 09:29:53.360095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.906 [2024-10-08 09:29:53.360114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.906 [2024-10-08 09:29:53.360187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.906 [2024-10-08 09:29:53.360211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.906 [2024-10-08 09:29:53.360233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.907 [2024-10-08 09:29:53.360304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.907 [2024-10-08 09:29:53.360451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.907 [2024-10-08 09:29:53.360578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.907 [2024-10-08 09:29:53.360603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.907 [2024-10-08 09:29:53.360622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.907 [2024-10-08 09:29:53.360673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.907 [2024-10-08 09:29:53.360696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:01.907 [2024-10-08 09:29:53.360716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.907 [2024-10-08 09:29:53.360735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.907 [2024-10-08 09:29:53.360793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.907 [2024-10-08 09:29:53.360815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.907 [2024-10-08 09:29:53.360886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.907 [2024-10-08 09:29:53.360908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.907 [2024-10-08 09:29:53.360970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.907 [2024-10-08 09:29:53.361041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.907 [2024-10-08 09:29:53.361065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.907 [2024-10-08 09:29:53.361091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.907 [2024-10-08 09:29:53.361905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 381.034 ms, result 0 00:24:02.531 00:24:02.531 00:24:02.791 09:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:04.706 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:04.706 09:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:04.706 [2024-10-08 09:29:56.326551] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:24:04.706 [2024-10-08 09:29:56.326644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78594 ] 00:24:04.967 [2024-10-08 09:29:56.472250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.227 [2024-10-08 09:29:56.685237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.490 [2024-10-08 09:29:56.975294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.490 [2024-10-08 09:29:56.975688] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:05.490 [2024-10-08 09:29:57.136487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.136546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:05.490 [2024-10-08 09:29:57.136561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:05.490 [2024-10-08 09:29:57.136569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.136627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.136638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.490 [2024-10-08 09:29:57.136646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:05.490 [2024-10-08 09:29:57.136655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.136675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:05.490 [2024-10-08 09:29:57.137363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:05.490 [2024-10-08 09:29:57.137382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.137421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.490 [2024-10-08 09:29:57.137430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:24:05.490 [2024-10-08 09:29:57.137438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.139190] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:05.490 [2024-10-08 09:29:57.153629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.153681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:05.490 [2024-10-08 09:29:57.153695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.441 ms 00:24:05.490 [2024-10-08 09:29:57.153703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.153778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.153788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:05.490 [2024-10-08 09:29:57.153797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:05.490 [2024-10-08 09:29:57.153805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.161781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.161956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.490 [2024-10-08 09:29:57.161973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.900 ms 00:24:05.490 [2024-10-08 09:29:57.161982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.162067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.162077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.490 [2024-10-08 09:29:57.162085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:05.490 [2024-10-08 09:29:57.162093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.162139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.162149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:05.490 [2024-10-08 09:29:57.162158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:05.490 [2024-10-08 09:29:57.162166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.162190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:05.490 [2024-10-08 09:29:57.166429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.166486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.490 [2024-10-08 09:29:57.166497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:24:05.490 [2024-10-08 09:29:57.166505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.166539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.166548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:05.490 [2024-10-08 09:29:57.166557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:05.490 [2024-10-08 09:29:57.166565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.166619] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:05.490 [2024-10-08 09:29:57.166642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:05.490 [2024-10-08 09:29:57.166680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:05.490 [2024-10-08 09:29:57.166696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:24:05.490 [2024-10-08 09:29:57.166804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:05.490 [2024-10-08 09:29:57.166816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:05.490 [2024-10-08 09:29:57.166827] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:05.490 [2024-10-08 09:29:57.166843] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:05.490 [2024-10-08 09:29:57.166852] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:05.490 [2024-10-08 09:29:57.166861] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:05.490 [2024-10-08 09:29:57.166868] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:05.490 [2024-10-08 09:29:57.166876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:05.490 [2024-10-08 09:29:57.166884] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:05.490 [2024-10-08 09:29:57.166893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.166906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:05.490 [2024-10-08 09:29:57.166916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:05.490 [2024-10-08 09:29:57.166924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.167011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.490 [2024-10-08 09:29:57.167028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:05.490 [2024-10-08 09:29:57.167037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:05.490 [2024-10-08 09:29:57.167044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.490 [2024-10-08 09:29:57.167148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:05.490 [2024-10-08 09:29:57.167159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:05.490 [2024-10-08 09:29:57.167168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.490 [2024-10-08 09:29:57.167175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:05.490 [2024-10-08 09:29:57.167192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:05.490 [2024-10-08 09:29:57.167206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:05.490 [2024-10-08 09:29:57.167213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.490 [2024-10-08 09:29:57.167226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:05.490 [2024-10-08 09:29:57.167232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:05.490 [2024-10-08 09:29:57.167239] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:05.490 [2024-10-08 09:29:57.167253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:05.490 [2024-10-08 09:29:57.167262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:05.490 [2024-10-08 09:29:57.167269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:05.490 [2024-10-08 09:29:57.167283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:05.490 [2024-10-08 09:29:57.167289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:05.490 [2024-10-08 09:29:57.167304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:05.490 [2024-10-08 09:29:57.167310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.490 [2024-10-08 09:29:57.167317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:05.490 [2024-10-08 09:29:57.167324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.491 [2024-10-08 09:29:57.167337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:05.491 [2024-10-08 09:29:57.167344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.491 [2024-10-08 09:29:57.167357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:05.491 [2024-10-08 09:29:57.167363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:05.491 [2024-10-08 09:29:57.167377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:05.491 [2024-10-08 09:29:57.167426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.491 [2024-10-08 09:29:57.167441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:05.491 [2024-10-08 09:29:57.167448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:05.491 [2024-10-08 09:29:57.167455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:05.491 [2024-10-08 09:29:57.167462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:05.491 [2024-10-08 09:29:57.167469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:05.491 [2024-10-08 09:29:57.167475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:05.491 [2024-10-08 09:29:57.167489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:05.491 [2024-10-08 09:29:57.167496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:05.491 [2024-10-08 
09:29:57.167511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:05.491 [2024-10-08 09:29:57.167521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:05.491 [2024-10-08 09:29:57.167531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:05.491 [2024-10-08 09:29:57.167539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:05.491 [2024-10-08 09:29:57.167546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:05.491 [2024-10-08 09:29:57.167554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:05.491 [2024-10-08 09:29:57.167561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:05.491 [2024-10-08 09:29:57.167568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:05.491 [2024-10-08 09:29:57.167575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:05.491 [2024-10-08 09:29:57.167583] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:05.491 [2024-10-08 09:29:57.167593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:05.491 [2024-10-08 09:29:57.167609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:05.491 [2024-10-08 09:29:57.167617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:05.491 [2024-10-08 09:29:57.167624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:05.491 [2024-10-08 09:29:57.167632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:05.491 [2024-10-08 09:29:57.167639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:05.491 [2024-10-08 09:29:57.167646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:05.491 [2024-10-08 09:29:57.167653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:05.491 [2024-10-08 09:29:57.167660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:05.491 [2024-10-08 09:29:57.167668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:05.491 [2024-10-08 09:29:57.167704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:05.491 [2024-10-08 09:29:57.167712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:05.491 [2024-10-08 09:29:57.167728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:05.491 [2024-10-08 09:29:57.167735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:05.491 [2024-10-08 09:29:57.167742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:05.491 [2024-10-08 09:29:57.167749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.491 [2024-10-08 09:29:57.167757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:05.491 [2024-10-08 09:29:57.167764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:24:05.491 [2024-10-08 09:29:57.167774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.215006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.215208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.753 [2024-10-08 09:29:57.215229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.183 ms 00:24:05.753 [2024-10-08 09:29:57.215238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.215345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.215354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:05.753 [2024-10-08 09:29:57.215363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:05.753 [2024-10-08 09:29:57.215371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.249988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.250160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.753 [2024-10-08 09:29:57.250184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.511 ms 00:24:05.753 [2024-10-08 09:29:57.250193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.250231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.250240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.753 [2024-10-08 09:29:57.250249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:05.753 [2024-10-08 09:29:57.250257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.250843] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.250865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.753 [2024-10-08 09:29:57.250875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:24:05.753 [2024-10-08 09:29:57.250893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.251052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.251073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.753 [2024-10-08 09:29:57.251084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:05.753 [2024-10-08 09:29:57.251092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.265489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.265531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.753 [2024-10-08 09:29:57.265542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.377 ms 00:24:05.753 [2024-10-08 09:29:57.265550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.279674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:05.753 [2024-10-08 09:29:57.279721] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:05.753 [2024-10-08 09:29:57.279734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.279743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:05.753 [2024-10-08 09:29:57.279753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.077 ms 00:24:05.753 [2024-10-08 09:29:57.279760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.305431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.305477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:05.753 [2024-10-08 09:29:57.305488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:24:05.753 [2024-10-08 09:29:57.305497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.318114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.318155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:05.753 [2024-10-08 09:29:57.318166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.565 ms 00:24:05.753 [2024-10-08 09:29:57.318174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.330400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 [2024-10-08 09:29:57.330441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:05.753 [2024-10-08 09:29:57.330453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:24:05.753 [2024-10-08 09:29:57.330460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.753 [2024-10-08 09:29:57.331100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.753 
[2024-10-08 09:29:57.331117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:05.753 [2024-10-08 09:29:57.331127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:24:05.753 [2024-10-08 09:29:57.331135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.395275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.395337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:05.754 [2024-10-08 09:29:57.395351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.121 ms 00:24:05.754 [2024-10-08 09:29:57.395360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.406464] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:05.754 [2024-10-08 09:29:57.409348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.409541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:05.754 [2024-10-08 09:29:57.409561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.896 ms 00:24:05.754 [2024-10-08 09:29:57.409576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.409658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.409669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:05.754 [2024-10-08 09:29:57.409679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:05.754 [2024-10-08 09:29:57.409686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.410489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.410522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:05.754 [2024-10-08 09:29:57.410535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:24:05.754 [2024-10-08 09:29:57.410544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.410578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.410588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:05.754 [2024-10-08 09:29:57.410598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:05.754 [2024-10-08 09:29:57.410614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.754 [2024-10-08 09:29:57.410653] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:05.754 [2024-10-08 09:29:57.410665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.754 [2024-10-08 09:29:57.410674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:05.754 [2024-10-08 09:29:57.410683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:05.754 [2024-10-08 09:29:57.410696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.015 [2024-10-08 09:29:57.435774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.015 [2024-10-08 09:29:57.435823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:06.015 
[2024-10-08 09:29:57.435837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.057 ms 00:24:06.015 [2024-10-08 09:29:57.435846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.016 [2024-10-08 09:29:57.435934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.016 [2024-10-08 09:29:57.435945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:06.016 [2024-10-08 09:29:57.435954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:06.016 [2024-10-08 09:29:57.435962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.016 [2024-10-08 09:29:57.437188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.211 ms, result 0 00:24:06.958  [2024-10-08T09:30:00.029Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-08T09:30:00.973Z] Copying: 39/1024 [MB] (22 MBps) [2024-10-08T09:30:01.917Z] Copying: 66/1024 [MB] (27 MBps) [2024-10-08T09:30:02.860Z] Copying: 82/1024 [MB] (16 MBps) [2024-10-08T09:30:03.800Z] Copying: 101/1024 [MB] (19 MBps) [2024-10-08T09:30:04.743Z] Copying: 127/1024 [MB] (26 MBps) [2024-10-08T09:30:05.685Z] Copying: 145/1024 [MB] (18 MBps) [2024-10-08T09:30:06.629Z] Copying: 168/1024 [MB] (22 MBps) [2024-10-08T09:30:08.016Z] Copying: 184/1024 [MB] (16 MBps) [2024-10-08T09:30:08.958Z] Copying: 195/1024 [MB] (10 MBps) [2024-10-08T09:30:09.901Z] Copying: 208/1024 [MB] (13 MBps) [2024-10-08T09:30:10.843Z] Copying: 228/1024 [MB] (20 MBps) [2024-10-08T09:30:11.820Z] Copying: 239/1024 [MB] (10 MBps) [2024-10-08T09:30:12.764Z] Copying: 253/1024 [MB] (14 MBps) [2024-10-08T09:30:13.708Z] Copying: 264/1024 [MB] (10 MBps) [2024-10-08T09:30:14.652Z] Copying: 274/1024 [MB] (10 MBps) [2024-10-08T09:30:16.040Z] Copying: 285/1024 [MB] (10 MBps) [2024-10-08T09:30:16.984Z] Copying: 295/1024 [MB] (10 MBps) [2024-10-08T09:30:17.926Z] Copying: 309/1024 [MB] (13 MBps) [2024-10-08T09:30:18.875Z] Copying: 320/1024 [MB] (11 MBps) [2024-10-08T09:30:19.819Z] Copying: 330/1024 [MB] (10 MBps) [2024-10-08T09:30:20.761Z] Copying: 348/1024 [MB] (17 MBps) [2024-10-08T09:30:21.704Z] Copying: 367/1024 [MB] (18 MBps) [2024-10-08T09:30:22.647Z] Copying: 380/1024 [MB] (13 MBps) [2024-10-08T09:30:24.033Z] Copying: 395/1024 [MB] (14 MBps) [2024-10-08T09:30:24.978Z] Copying: 412/1024 [MB] (17 MBps) [2024-10-08T09:30:25.920Z] Copying: 427/1024 [MB] (14 MBps) [2024-10-08T09:30:26.864Z] Copying: 445/1024 [MB] (18 MBps) [2024-10-08T09:30:27.807Z] Copying: 460/1024 [MB] (14 MBps) [2024-10-08T09:30:28.779Z] Copying: 479/1024 [MB] (18 MBps) [2024-10-08T09:30:29.723Z] Copying: 491/1024 [MB] (11 MBps) [2024-10-08T09:30:30.664Z] Copying: 509/1024 [MB] (18 MBps) [2024-10-08T09:30:32.048Z] Copying: 523/1024 [MB] (14 MBps) [2024-10-08T09:30:32.619Z] Copying: 539/1024 [MB] (16 MBps) [2024-10-08T09:30:34.002Z] Copying: 557/1024 [MB] (17 MBps) [2024-10-08T09:30:34.943Z] Copying: 570/1024 [MB] (13 MBps) [2024-10-08T09:30:35.883Z] Copying: 588/1024 [MB] (18 MBps) [2024-10-08T09:30:36.824Z] Copying: 600/1024 [MB] (11 MBps) [2024-10-08T09:30:37.765Z] Copying: 610/1024 [MB] (10 MBps) [2024-10-08T09:30:38.705Z] Copying: 621/1024 [MB] (10 MBps) [2024-10-08T09:30:39.644Z] Copying: 632/1024 [MB] (10 MBps) [2024-10-08T09:30:41.027Z] Copying: 642/1024 [MB] (10 MBps) [2024-10-08T09:30:41.966Z] Copying: 662/1024 [MB] (19 MBps) [2024-10-08T09:30:42.907Z] Copying: 678/1024 [MB] (16 MBps) [2024-10-08T09:30:43.847Z] Copying: 694/1024 [MB] (15 MBps) 
[2024-10-08T09:30:44.787Z] Copying: 708/1024 [MB] (14 MBps) [2024-10-08T09:30:45.758Z] Copying: 722/1024 [MB] (14 MBps) [2024-10-08T09:30:46.701Z] Copying: 738/1024 [MB] (15 MBps) [2024-10-08T09:30:47.646Z] Copying: 756/1024 [MB] (17 MBps) [2024-10-08T09:30:49.031Z] Copying: 773/1024 [MB] (17 MBps) [2024-10-08T09:30:49.974Z] Copying: 785/1024 [MB] (11 MBps) [2024-10-08T09:30:50.919Z] Copying: 803/1024 [MB] (18 MBps) [2024-10-08T09:30:51.865Z] Copying: 821/1024 [MB] (17 MBps) [2024-10-08T09:30:52.809Z] Copying: 836/1024 [MB] (14 MBps) [2024-10-08T09:30:53.754Z] Copying: 853/1024 [MB] (17 MBps) [2024-10-08T09:30:54.698Z] Copying: 868/1024 [MB] (14 MBps) [2024-10-08T09:30:55.642Z] Copying: 880/1024 [MB] (11 MBps) [2024-10-08T09:30:57.028Z] Copying: 902/1024 [MB] (22 MBps) [2024-10-08T09:30:57.972Z] Copying: 917/1024 [MB] (14 MBps) [2024-10-08T09:30:58.917Z] Copying: 937/1024 [MB] (20 MBps) [2024-10-08T09:30:59.862Z] Copying: 949/1024 [MB] (11 MBps) [2024-10-08T09:31:00.805Z] Copying: 959/1024 [MB] (10 MBps) [2024-10-08T09:31:01.749Z] Copying: 970/1024 [MB] (10 MBps) [2024-10-08T09:31:02.693Z] Copying: 990/1024 [MB] (20 MBps) [2024-10-08T09:31:03.312Z] Copying: 1011/1024 [MB] (21 MBps) [2024-10-08T09:31:03.312Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-08 09:31:03.180957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.629 [2024-10-08 09:31:03.181149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:11.629 [2024-10-08 09:31:03.181249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:11.629 [2024-10-08 09:31:03.181278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.629 [2024-10-08 09:31:03.181328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:11.629 [2024-10-08 09:31:03.184533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.629 [2024-10-08 09:31:03.184693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:11.629 [2024-10-08 09:31:03.184763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:25:11.629 [2024-10-08 09:31:03.185108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.629 [2024-10-08 09:31:03.185368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.629 [2024-10-08 09:31:03.185508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:11.629 [2024-10-08 09:31:03.185538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:25:11.629 [2024-10-08 09:31:03.185597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.629 [2024-10-08 09:31:03.189354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.629 [2024-10-08 09:31:03.189483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:11.629 [2024-10-08 09:31:03.189545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.717 ms 00:25:11.629 [2024-10-08 09:31:03.189567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.629 [2024-10-08 09:31:03.195989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.629 [2024-10-08 09:31:03.196134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:11.629 [2024-10-08 09:31:03.196197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.387 ms 00:25:11.629 
[2024-10-08 09:31:03.196220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.629 [2024-10-08 09:31:03.222782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.630 [2024-10-08 09:31:03.222952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:11.630 [2024-10-08 09:31:03.223011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.489 ms 00:25:11.630 [2024-10-08 09:31:03.223033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.630 [2024-10-08 09:31:03.238361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.630 [2024-10-08 09:31:03.238492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:11.630 [2024-10-08 09:31:03.238543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.283 ms 00:25:11.630 [2024-10-08 09:31:03.238565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.630 [2024-10-08 09:31:03.242583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.630 [2024-10-08 09:31:03.242679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:11.630 [2024-10-08 09:31:03.242725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:25:11.630 [2024-10-08 09:31:03.242747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.630 [2024-10-08 09:31:03.266707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.630 [2024-10-08 09:31:03.266812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:11.630 [2024-10-08 09:31:03.266858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.934 ms 00:25:11.630 [2024-10-08 09:31:03.266879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.630 [2024-10-08 09:31:03.299071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.630 [2024-10-08 09:31:03.299211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:11.630 [2024-10-08 09:31:03.299269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.948 ms 00:25:11.630 [2024-10-08 09:31:03.299291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.893 [2024-10-08 09:31:03.322498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.893 [2024-10-08 09:31:03.322621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:11.893 [2024-10-08 09:31:03.322672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.164 ms 00:25:11.893 [2024-10-08 09:31:03.322694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.893 [2024-10-08 09:31:03.345988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.893 [2024-10-08 09:31:03.346106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:11.893 [2024-10-08 09:31:03.346121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.128 ms 00:25:11.893 [2024-10-08 09:31:03.346129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.893 [2024-10-08 09:31:03.346157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:11.893 [2024-10-08 09:31:03.346171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:11.893 
[2024-10-08 09:31:03.346181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:11.893 [2024-10-08 09:31:03.346189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [2024-10-08 09:31:03.346354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: 
free 00:25:11.893 [2024-10-08 09:31:03.346361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:11.893 [Bands 28-100 identical: each reports 0 / 261120 wr_cnt: 0 state: free] 00:25:11.894 [2024-10-08 09:31:03.346939] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:11.894 [2024-10-08 09:31:03.346946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 54e93b64-5b47-437d-a677-89097ad5eeb3 00:25:11.894 [2024-10-08 09:31:03.346953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:11.894 [2024-10-08 09:31:03.346961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:11.894 [2024-10-08 09:31:03.346968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:11.894 [2024-10-08 09:31:03.346975] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:11.894 [2024-10-08 09:31:03.346982] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:11.894 [2024-10-08 09:31:03.346989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:11.894 [2024-10-08 09:31:03.347000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:11.894 [2024-10-08 09:31:03.347006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:11.894 [2024-10-08 09:31:03.347012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:11.894 [2024-10-08 09:31:03.347019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.894 [2024-10-08 09:31:03.347032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.894 [2024-10-08 09:31:03.347040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:25:11.894 [2024-10-08 09:31:03.347047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.359821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.894 [2024-10-08 09:31:03.359852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.894 [2024-10-08 09:31:03.359862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.757 ms 00:25:11.894 [2024-10-08 09:31:03.359874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.360228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.894 [2024-10-08 09:31:03.360237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.894 [2024-10-08 09:31:03.360246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:25:11.894 [2024-10-08 09:31:03.360253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.389252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.389289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.894 [2024-10-08 09:31:03.389302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.389310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.389357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.389366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.894 [2024-10-08 09:31:03.389373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.389381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.389445] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.389455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.894 [2024-10-08 09:31:03.389463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.389474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.389488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.389496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.894 [2024-10-08 09:31:03.389504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.389511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.468795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.468855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.894 [2024-10-08 09:31:03.468873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.468881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.537707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.537762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.894 [2024-10-08 09:31:03.537774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.537782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.537869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.537879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.894 [2024-10-08 09:31:03.537889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.537898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.537937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.537947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.894 [2024-10-08 09:31:03.537955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.537964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.538058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.538069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.894 [2024-10-08 09:31:03.538078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.538086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.538119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.538130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:11.894 [2024-10-08 09:31:03.538138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.538147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:11.894 [2024-10-08 09:31:03.538190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.894 [2024-10-08 09:31:03.538200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:11.894 [2024-10-08 09:31:03.538209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.894 [2024-10-08 09:31:03.538217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.894 [2024-10-08 09:31:03.538270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.895 [2024-10-08 09:31:03.538281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:11.895 [2024-10-08 09:31:03.538290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.895 [2024-10-08 09:31:03.538298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.895 [2024-10-08 09:31:03.538470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.439 ms, result 0 00:25:12.838 00:25:12.838 00:25:12.838 09:31:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:14.750 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:14.750 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:14.750 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:14.750 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:14.750 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:15.009 Process with pid 76349 is not found 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76349 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76349 ']' 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76349 00:25:15.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76349) - No such process 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76349 is not found' 00:25:15.009 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:15.267 Remove shared memory files 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:15.267 
************************************ 00:25:15.267 END TEST ftl_dirty_shutdown 00:25:15.267 ************************************ 00:25:15.267 00:25:15.267 real 4m41.459s 00:25:15.267 user 4m59.012s 00:25:15.267 sys 0m26.326s 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.267 09:31:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:15.527 09:31:06 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:15.527 09:31:06 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:15.527 09:31:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.527 09:31:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:15.527 ************************************ 00:25:15.527 START TEST ftl_upgrade_shutdown 00:25:15.527 ************************************ 00:25:15.527 09:31:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:15.527 * Looking for test storage... 00:25:15.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:15.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.527 --rc genhtml_branch_coverage=1 00:25:15.527 --rc genhtml_function_coverage=1 00:25:15.527 --rc genhtml_legend=1 00:25:15.527 --rc geninfo_all_blocks=1 00:25:15.527 --rc geninfo_unexecuted_blocks=1 00:25:15.527 00:25:15.527 ' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:15.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.527 --rc genhtml_branch_coverage=1 00:25:15.527 --rc genhtml_function_coverage=1 00:25:15.527 --rc genhtml_legend=1 00:25:15.527 --rc geninfo_all_blocks=1 00:25:15.527 --rc geninfo_unexecuted_blocks=1 00:25:15.527 00:25:15.527 ' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:15.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.527 --rc genhtml_branch_coverage=1 00:25:15.527 --rc genhtml_function_coverage=1 00:25:15.527 --rc genhtml_legend=1 00:25:15.527 --rc geninfo_all_blocks=1 00:25:15.527 --rc geninfo_unexecuted_blocks=1 00:25:15.527 00:25:15.527 ' 00:25:15.527 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:15.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.527 --rc genhtml_branch_coverage=1 00:25:15.527 --rc genhtml_function_coverage=1 00:25:15.528 --rc genhtml_legend=1 00:25:15.528 --rc geninfo_all_blocks=1 00:25:15.528 --rc geninfo_unexecuted_blocks=1 00:25:15.528 00:25:15.528 ' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:15.528 09:31:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79383 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79383 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79383 ']' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:15.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.528 09:31:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:15.789 [2024-10-08 09:31:07.236994] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:25:15.789 [2024-10-08 09:31:07.237130] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79383 ] 00:25:15.789 [2024-10-08 09:31:07.389220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.048 [2024-10-08 09:31:07.584837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:16.620 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:16.880 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:17.140 { 00:25:17.140 "name": "basen1", 00:25:17.140 "aliases": [ 00:25:17.140 "51e87ca5-40d0-445a-9b5c-b1213c870f39" 00:25:17.140 ], 00:25:17.140 "product_name": "NVMe disk", 00:25:17.140 "block_size": 4096, 00:25:17.140 "num_blocks": 1310720, 00:25:17.140 "uuid": "51e87ca5-40d0-445a-9b5c-b1213c870f39", 00:25:17.140 "numa_id": -1, 00:25:17.140 "assigned_rate_limits": { 00:25:17.140 "rw_ios_per_sec": 0, 00:25:17.140 "rw_mbytes_per_sec": 0, 00:25:17.140 "r_mbytes_per_sec": 0, 00:25:17.140 "w_mbytes_per_sec": 0 00:25:17.140 }, 00:25:17.140 "claimed": true, 00:25:17.140 "claim_type": "read_many_write_one", 00:25:17.140 "zoned": false, 00:25:17.140 "supported_io_types": { 00:25:17.140 "read": true, 00:25:17.140 "write": true, 00:25:17.140 "unmap": true, 00:25:17.140 "flush": true, 00:25:17.140 "reset": true, 00:25:17.140 "nvme_admin": true, 00:25:17.140 "nvme_io": true, 00:25:17.140 "nvme_io_md": false, 00:25:17.140 "write_zeroes": true, 00:25:17.140 "zcopy": false, 00:25:17.140 "get_zone_info": false, 00:25:17.140 "zone_management": false, 00:25:17.140 "zone_append": false, 00:25:17.140 "compare": true, 00:25:17.140 "compare_and_write": false, 00:25:17.140 "abort": true, 00:25:17.140 "seek_hole": false, 00:25:17.140 "seek_data": false, 00:25:17.140 "copy": true, 00:25:17.140 "nvme_iov_md": false 00:25:17.140 }, 00:25:17.140 "driver_specific": { 00:25:17.140 "nvme": [ 00:25:17.140 { 00:25:17.140 "pci_address": "0000:00:11.0", 00:25:17.140 "trid": { 00:25:17.140 "trtype": "PCIe", 00:25:17.140 "traddr": "0000:00:11.0" 00:25:17.140 }, 00:25:17.140 "ctrlr_data": { 00:25:17.140 "cntlid": 0, 00:25:17.140 "vendor_id": "0x1b36", 00:25:17.140 "model_number": "QEMU NVMe Ctrl", 00:25:17.140 "serial_number": "12341", 00:25:17.140 "firmware_revision": "8.0.0", 00:25:17.140 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:17.140 "oacs": { 00:25:17.140 "security": 0, 00:25:17.140 "format": 1, 00:25:17.140 "firmware": 0, 00:25:17.140 "ns_manage": 1 00:25:17.140 }, 00:25:17.140 "multi_ctrlr": false, 00:25:17.140 "ana_reporting": false 00:25:17.140 }, 00:25:17.140 "vs": { 00:25:17.140 "nvme_version": "1.4" 00:25:17.140 }, 00:25:17.140 "ns_data": { 00:25:17.140 "id": 1, 00:25:17.140 "can_share": false 00:25:17.140 } 00:25:17.140 } 00:25:17.140 ], 00:25:17.140 "mp_policy": "active_passive" 00:25:17.140 } 00:25:17.140 } 00:25:17.140 ]' 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ef44fc76-3933-4f32-b1ec-f580351013b5 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:17.140 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef44fc76-3933-4f32-b1ec-f580351013b5 00:25:17.401 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:17.662 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d62e482d-abab-4e0c-8d50-7f018511949c 00:25:17.662 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d62e482d-abab-4e0c-8d50-7f018511949c 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=cd739ff3-e64c-4d84-b34e-bd80ae0df60d 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z cd739ff3-e64c-4d84-b34e-bd80ae0df60d ]] 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 cd739ff3-e64c-4d84-b34e-bd80ae0df60d 5120 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=cd739ff3-e64c-4d84-b34e-bd80ae0df60d 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size cd739ff3-e64c-4d84-b34e-bd80ae0df60d 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=cd739ff3-e64c-4d84-b34e-bd80ae0df60d 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd739ff3-e64c-4d84-b34e-bd80ae0df60d 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:17.921 { 00:25:17.921 "name": "cd739ff3-e64c-4d84-b34e-bd80ae0df60d", 00:25:17.921 "aliases": [ 00:25:17.921 "lvs/basen1p0" 00:25:17.921 ], 00:25:17.921 "product_name": "Logical Volume", 00:25:17.921 "block_size": 4096, 00:25:17.921 "num_blocks": 5242880, 00:25:17.921 "uuid": "cd739ff3-e64c-4d84-b34e-bd80ae0df60d", 00:25:17.921 "assigned_rate_limits": { 00:25:17.921 "rw_ios_per_sec": 0, 00:25:17.921 "rw_mbytes_per_sec": 0, 00:25:17.921 "r_mbytes_per_sec": 0, 00:25:17.921 "w_mbytes_per_sec": 0 00:25:17.921 }, 00:25:17.921 "claimed": false, 00:25:17.921 "zoned": false, 00:25:17.921 "supported_io_types": { 00:25:17.921 "read": true, 00:25:17.921 "write": true, 00:25:17.921 "unmap": true, 00:25:17.921 "flush": false, 00:25:17.921 "reset": true, 00:25:17.921 "nvme_admin": false, 00:25:17.921 "nvme_io": false, 00:25:17.921 "nvme_io_md": false, 00:25:17.921 "write_zeroes": 
true, 00:25:17.921 "zcopy": false, 00:25:17.921 "get_zone_info": false, 00:25:17.921 "zone_management": false, 00:25:17.921 "zone_append": false, 00:25:17.921 "compare": false, 00:25:17.921 "compare_and_write": false, 00:25:17.921 "abort": false, 00:25:17.921 "seek_hole": true, 00:25:17.921 "seek_data": true, 00:25:17.921 "copy": false, 00:25:17.921 "nvme_iov_md": false 00:25:17.921 }, 00:25:17.921 "driver_specific": { 00:25:17.921 "lvol": { 00:25:17.921 "lvol_store_uuid": "d62e482d-abab-4e0c-8d50-7f018511949c", 00:25:17.921 "base_bdev": "basen1", 00:25:17.921 "thin_provision": true, 00:25:17.921 "num_allocated_clusters": 0, 00:25:17.921 "snapshot": false, 00:25:17.921 "clone": false, 00:25:17.921 "esnap_clone": false 00:25:17.921 } 00:25:17.921 } 00:25:17.921 } 00:25:17.921 ]' 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:17.921 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:18.182 09:31:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:18.443 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:18.443 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:18.443 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d cd739ff3-e64c-4d84-b34e-bd80ae0df60d -c cachen1p0 --l2p_dram_limit 2 00:25:18.704 [2024-10-08 09:31:10.190616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.190671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:18.704 [2024-10-08 09:31:10.190688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:18.704 [2024-10-08 09:31:10.190696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.190752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.190762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:18.704 [2024-10-08 09:31:10.190772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:18.704 [2024-10-08 09:31:10.190779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.190804] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:18.704 [2024-10-08 
09:31:10.191537] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:18.704 [2024-10-08 09:31:10.191566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.191574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:18.704 [2024-10-08 09:31:10.191584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.768 ms 00:25:18.704 [2024-10-08 09:31:10.191593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.191663] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a24f5ea0-0ee9-4699-bd85-513426d27a1f 00:25:18.704 [2024-10-08 09:31:10.192750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.192783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:18.704 [2024-10-08 09:31:10.192793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:18.704 [2024-10-08 09:31:10.192804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.198092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.198124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:18.704 [2024-10-08 09:31:10.198133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.245 ms 00:25:18.704 [2024-10-08 09:31:10.198143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.198193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.198203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:18.704 [2024-10-08 09:31:10.198211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:18.704 [2024-10-08 09:31:10.198224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.198275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.198286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:18.704 [2024-10-08 09:31:10.198294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:25:18.704 [2024-10-08 09:31:10.198303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.198324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:18.704 [2024-10-08 09:31:10.201908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.201938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:18.704 [2024-10-08 09:31:10.201951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.587 ms 00:25:18.704 [2024-10-08 09:31:10.201958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.201984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.201992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:18.704 [2024-10-08 09:31:10.202001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:18.704 [2024-10-08 09:31:10.202011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.202029] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:18.704 [2024-10-08 09:31:10.202162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:18.704 [2024-10-08 09:31:10.202176] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:18.704 [2024-10-08 09:31:10.202187] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:18.704 [2024-10-08 09:31:10.202201] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:18.704 [2024-10-08 09:31:10.202209] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:18.704 [2024-10-08 09:31:10.202219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:18.704 [2024-10-08 09:31:10.202226] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:18.704 [2024-10-08 09:31:10.202234] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:18.704 [2024-10-08 09:31:10.202241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:18.704 [2024-10-08 09:31:10.202251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.202258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:18.704 [2024-10-08 09:31:10.202267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.223 ms 00:25:18.704 [2024-10-08 09:31:10.202274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.202358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.704 [2024-10-08 09:31:10.202376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:18.704 [2024-10-08 09:31:10.202385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:18.704 [2024-10-08 09:31:10.202404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.704 [2024-10-08 09:31:10.202514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:18.704 [2024-10-08 09:31:10.202530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:18.704 [2024-10-08 09:31:10.202541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:18.704 [2024-10-08 09:31:10.202548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.704 [2024-10-08 09:31:10.202557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:18.704 [2024-10-08 09:31:10.202564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:18.704 [2024-10-08 09:31:10.202572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:18.704 [2024-10-08 09:31:10.202579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:18.705 [2024-10-08 09:31:10.202587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:18.705 [2024-10-08 09:31:10.202593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:18.705 [2024-10-08 09:31:10.202608] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:18.705 [2024-10-08 09:31:10.202616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:18.705 [2024-10-08 09:31:10.202631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:18.705 [2024-10-08 09:31:10.202637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:18.705 [2024-10-08 09:31:10.202654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:18.705 [2024-10-08 09:31:10.202661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:18.705 [2024-10-08 09:31:10.202678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:18.705 [2024-10-08 09:31:10.202699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:18.705 [2024-10-08 09:31:10.202724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:18.705 [2024-10-08 09:31:10.202745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:18.705 [2024-10-08 09:31:10.202768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:18.705 [2024-10-08 09:31:10.202789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:18.705 [2024-10-08 09:31:10.202812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:18.705 [2024-10-08 09:31:10.202833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:18.705 [2024-10-08 09:31:10.202841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202847] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:18.705 [2024-10-08 09:31:10.202856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:18.705 [2024-10-08 09:31:10.202864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:18.705 [2024-10-08 09:31:10.202881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:18.705 [2024-10-08 09:31:10.202891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:18.705 [2024-10-08 09:31:10.202897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:18.705 [2024-10-08 09:31:10.202905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:18.705 [2024-10-08 09:31:10.202912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:18.705 [2024-10-08 09:31:10.202919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:18.705 [2024-10-08 09:31:10.202929] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:18.705 [2024-10-08 09:31:10.202940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.202948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:18.705 [2024-10-08 09:31:10.202957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.202964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.202972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:18.705 [2024-10-08 09:31:10.202979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:18.705 [2024-10-08 09:31:10.202987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:18.705 [2024-10-08 09:31:10.202994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:18.705 [2024-10-08 09:31:10.203003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:18.705 [2024-10-08 09:31:10.203059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:18.705 [2024-10-08 09:31:10.203070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:18.705 [2024-10-08 09:31:10.203087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:18.705 [2024-10-08 09:31:10.203094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:18.705 [2024-10-08 09:31:10.203103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:18.705 [2024-10-08 09:31:10.203110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:18.705 [2024-10-08 09:31:10.203119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:18.705 [2024-10-08 09:31:10.203126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.666 ms 00:25:18.705 [2024-10-08 09:31:10.203135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:18.705 [2024-10-08 09:31:10.203174] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:25:18.705 [2024-10-08 09:31:10.203187] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:22.916 [2024-10-08 09:31:13.889596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.889682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:22.916 [2024-10-08 09:31:13.889700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3686.406 ms 00:25:22.916 [2024-10-08 09:31:13.889712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.922158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.922231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:22.916 [2024-10-08 09:31:13.922247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.187 ms 00:25:22.916 [2024-10-08 09:31:13.922258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.922354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.922368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:22.916 [2024-10-08 09:31:13.922378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:22.916 [2024-10-08 09:31:13.922410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.966969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.967041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:22.916 [2024-10-08 09:31:13.967063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.494 ms 00:25:22.916 [2024-10-08 09:31:13.967078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.967137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.967151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:22.916 [2024-10-08 09:31:13.967162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:22.916 [2024-10-08 09:31:13.967175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.967869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.967921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:22.916 [2024-10-08 09:31:13.967944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:25:22.916 [2024-10-08 09:31:13.967960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.968019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.968033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:22.916 [2024-10-08 09:31:13.968044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:22.916 [2024-10-08 09:31:13.968059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.985863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:13.985918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:22.916 [2024-10-08 09:31:13.985930] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.779 ms 00:25:22.916 [2024-10-08 09:31:13.985941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:13.999432] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:22.916 [2024-10-08 09:31:14.000812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.000856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:22.916 [2024-10-08 09:31:14.000870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.774 ms 00:25:22.916 [2024-10-08 09:31:14.000881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.031586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.031647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:22.916 [2024-10-08 09:31:14.031667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.667 ms 00:25:22.916 [2024-10-08 09:31:14.031676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.031788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.031800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:22.916 [2024-10-08 09:31:14.031815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:22.916 [2024-10-08 09:31:14.031824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.057046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.057105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:22.916 [2024-10-08 09:31:14.057121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.143 ms 00:25:22.916 [2024-10-08 09:31:14.057130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.082758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.082813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:22.916 [2024-10-08 09:31:14.082829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.568 ms 00:25:22.916 [2024-10-08 09:31:14.082836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.083486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.083523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:22.916 [2024-10-08 09:31:14.083536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.597 ms 00:25:22.916 [2024-10-08 09:31:14.083544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.171720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.916 [2024-10-08 09:31:14.171780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:22.916 [2024-10-08 09:31:14.171801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.112 ms 00:25:22.916 [2024-10-08 09:31:14.171813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.199317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:22.916 [2024-10-08 09:31:14.199375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:22.916 [2024-10-08 09:31:14.199402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.400 ms 00:25:22.916 [2024-10-08 09:31:14.199421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.916 [2024-10-08 09:31:14.225315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.917 [2024-10-08 09:31:14.225368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:22.917 [2024-10-08 09:31:14.225383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.837 ms 00:25:22.917 [2024-10-08 09:31:14.225400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.917 [2024-10-08 09:31:14.251835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.917 [2024-10-08 09:31:14.251892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:22.917 [2024-10-08 09:31:14.251911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.376 ms 00:25:22.917 [2024-10-08 09:31:14.251919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.917 [2024-10-08 09:31:14.251978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.917 [2024-10-08 09:31:14.251988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:22.917 [2024-10-08 09:31:14.252002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:22.917 [2024-10-08 09:31:14.252013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.917 [2024-10-08 09:31:14.252124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:22.917 [2024-10-08 09:31:14.252135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:22.917 [2024-10-08 09:31:14.252147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:22.917 [2024-10-08 09:31:14.252155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:22.917 [2024-10-08 09:31:14.253452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4062.283 ms, result 0 00:25:22.917 { 00:25:22.917 "name": "ftl", 00:25:22.917 "uuid": "a24f5ea0-0ee9-4699-bd85-513426d27a1f" 00:25:22.917 } 00:25:22.917 09:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:22.917 [2024-10-08 09:31:14.476452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.917 09:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:23.177 09:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:23.177 [2024-10-08 09:31:14.860832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:23.438 09:31:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:23.438 [2024-10-08 09:31:15.061114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:23.438 09:31:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:24.010 Fill FTL, iteration 1 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79505 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79505 /var/tmp/spdk.tgt.sock 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79505 ']' 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:24.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.010 09:31:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:24.010 [2024-10-08 09:31:15.465149] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
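Condensed from the xtrace above, the target side needs only four RPCs to put the freshly started FTL bdev on the wire, plus a config save so the same target can be rebuilt later. Paths are shortened to rpc.py; the redirect of save_config into the tgt.json replayed near the end of this log is an assumption, since the xtrace does not show it:

  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  rpc.py save_config > tgt.json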
00:25:24.010 [2024-10-08 09:31:15.465271] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79505 ] 00:25:24.010 [2024-10-08 09:31:15.615851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.271 [2024-10-08 09:31:15.789535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.843 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.843 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:24.843 09:31:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:25.104 ftln1 00:25:25.104 09:31:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:25.104 09:31:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79505 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79505 ']' 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 79505 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79505 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:25.365 killing process with pid 79505 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79505' 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 79505 00:25:25.365 09:31:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 79505 00:25:26.750 09:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:26.750 09:31:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:26.750 [2024-10-08 09:31:18.300779] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
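The fill itself is a single spdk_dd run against the initiator's RPC socket, copied from the xtrace above: 1024 one-MiB blocks of /dev/urandom written into the attached ftln1 bdev at queue depth 2. --seek counts --bs-sized output blocks, which is why the second iteration later in the log advances it to 1024 to write the next GiB:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0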
00:25:26.750 [2024-10-08 09:31:18.301012] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79555 ] 00:25:27.011 [2024-10-08 09:31:18.454586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.011 [2024-10-08 09:31:18.594254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.453  [2024-10-08T09:31:21.079Z] Copying: 250/1024 [MB] (250 MBps) [2024-10-08T09:31:22.023Z] Copying: 497/1024 [MB] (247 MBps) [2024-10-08T09:31:22.966Z] Copying: 738/1024 [MB] (241 MBps) [2024-10-08T09:31:23.227Z] Copying: 980/1024 [MB] (242 MBps) [2024-10-08T09:31:23.799Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:25:32.116 00:25:32.116 Calculate MD5 checksum, iteration 1 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:32.116 09:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:32.377 [2024-10-08 09:31:23.810405] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
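To fingerprint what was just written, the test pulls the same GiB back out of ftln1 into a plain file and keeps only the digest; the read-back spdk_dd and the md5sum/cut pair below are taken from the xtrace around this point (the recorded digest for iteration 1, 76d5a46120684c226c2655ad465da8e4, appears a few lines further on):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '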
00:25:32.378 [2024-10-08 09:31:23.810526] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79608 ] 00:25:32.378 [2024-10-08 09:31:23.956927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.638 [2024-10-08 09:31:24.101606] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.025  [2024-10-08T09:31:25.968Z] Copying: 676/1024 [MB] (676 MBps) [2024-10-08T09:31:26.909Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:25:35.226 00:25:35.226 09:31:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:35.226 09:31:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:37.135 Fill FTL, iteration 2 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=76d5a46120684c226c2655ad465da8e4 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:37.135 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:37.135 [2024-10-08 09:31:28.715193] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
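Stitching the last few steps together: the upgrade_shutdown.sh variables visible in the xtrace (bs=1048576, count=1024, qd=2, iterations=2) drive a fill/checksum loop in which seek and skip advance by count blocks per pass, so each iteration touches a fresh GiB and banks its MD5 for comparison after the target restarts. A paraphrased sketch, where tcp_dd stands for the test's spdk_dd wrapper shown above and testdir is shorthand for the test/ftl directory:

  seek=0 skip=0 sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
  done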
00:25:37.135 [2024-10-08 09:31:28.715283] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79668 ] 00:25:37.393 [2024-10-08 09:31:28.858693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.393 [2024-10-08 09:31:29.031615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.771  [2024-10-08T09:31:31.395Z] Copying: 185/1024 [MB] (185 MBps) [2024-10-08T09:31:32.778Z] Copying: 364/1024 [MB] (179 MBps) [2024-10-08T09:31:33.720Z] Copying: 538/1024 [MB] (174 MBps) [2024-10-08T09:31:34.657Z] Copying: 783/1024 [MB] (245 MBps) [2024-10-08T09:31:35.223Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:25:43.540 00:25:43.540 Calculate MD5 checksum, iteration 2 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:43.540 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:43.540 [2024-10-08 09:31:35.080598] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:25:43.540 [2024-10-08 09:31:35.080716] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79733 ] 00:25:43.798 [2024-10-08 09:31:35.229412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.798 [2024-10-08 09:31:35.369104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.179  [2024-10-08T09:31:37.436Z] Copying: 634/1024 [MB] (634 MBps) [2024-10-08T09:31:38.371Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:25:46.688 00:25:46.688 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:46.688 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=315c0d1977d9a792de2fbbb5d6a2db0e 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.219 [2024-10-08 09:31:40.486592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.219 [2024-10-08 09:31:40.486648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.219 [2024-10-08 09:31:40.486661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:49.219 [2024-10-08 09:31:40.486672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.219 [2024-10-08 09:31:40.486692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.219 [2024-10-08 09:31:40.486700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.219 [2024-10-08 09:31:40.486707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.219 [2024-10-08 09:31:40.486713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.219 [2024-10-08 09:31:40.486730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.219 [2024-10-08 09:31:40.486737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.219 [2024-10-08 09:31:40.486744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.219 [2024-10-08 09:31:40.486750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.219 [2024-10-08 09:31:40.486804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.206 ms, result 0 00:25:49.219 true 00:25:49.219 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.219 { 00:25:49.219 "name": "ftl", 00:25:49.219 "properties": [ 00:25:49.219 { 00:25:49.219 "name": "superblock_version", 00:25:49.219 "value": 5, 00:25:49.219 "read-only": true 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "name": "base_device", 00:25:49.219 "bands": [ 00:25:49.219 { 00:25:49.219 "id": 0, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 
00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 1, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 2, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 3, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 4, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 5, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 6, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 7, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 8, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 9, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 10, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 11, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 12, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 13, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 14, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 15, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 16, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "id": 17, 00:25:49.219 "state": "FREE", 00:25:49.219 "validity": 0.0 00:25:49.219 } 00:25:49.219 ], 00:25:49.219 "read-only": true 00:25:49.219 }, 00:25:49.219 { 00:25:49.219 "name": "cache_device", 00:25:49.219 "type": "bdev", 00:25:49.219 "chunks": [ 00:25:49.219 { 00:25:49.219 "id": 0, 00:25:49.219 "state": "INACTIVE", 00:25:49.219 "utilization": 0.0 00:25:49.219 }, 00:25:49.219 { 00:25:49.220 "id": 1, 00:25:49.220 "state": "CLOSED", 00:25:49.220 "utilization": 1.0 00:25:49.220 }, 00:25:49.220 { 00:25:49.220 "id": 2, 00:25:49.220 "state": "CLOSED", 00:25:49.220 "utilization": 1.0 00:25:49.220 }, 00:25:49.220 { 00:25:49.220 "id": 3, 00:25:49.220 "state": "OPEN", 00:25:49.220 "utilization": 0.001953125 00:25:49.220 }, 00:25:49.220 { 00:25:49.220 "id": 4, 00:25:49.220 "state": "OPEN", 00:25:49.220 "utilization": 0.0 00:25:49.220 } 00:25:49.220 ], 00:25:49.220 "read-only": true 00:25:49.220 }, 00:25:49.220 { 00:25:49.220 "name": "verbose_mode", 00:25:49.220 "value": true, 00:25:49.220 "unit": "", 00:25:49.220 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.220 }, 00:25:49.220 { 00:25:49.220 "name": "prep_upgrade_on_shutdown", 00:25:49.220 "value": false, 00:25:49.220 "unit": "", 00:25:49.220 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.220 } 00:25:49.220 ] 00:25:49.220 } 00:25:49.220 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:49.220 [2024-10-08 09:31:40.822814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:49.220 [2024-10-08 09:31:40.822852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.220 [2024-10-08 09:31:40.822862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:49.220 [2024-10-08 09:31:40.822868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.220 [2024-10-08 09:31:40.822885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.220 [2024-10-08 09:31:40.822892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.220 [2024-10-08 09:31:40.822898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.220 [2024-10-08 09:31:40.822905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.220 [2024-10-08 09:31:40.822919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.220 [2024-10-08 09:31:40.822926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.220 [2024-10-08 09:31:40.822932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.220 [2024-10-08 09:31:40.822937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.220 [2024-10-08 09:31:40.822977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.154 ms, result 0 00:25:49.220 true 00:25:49.220 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:49.220 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.220 09:31:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:49.478 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:49.478 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:49.478 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.736 [2024-10-08 09:31:41.243129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.736 [2024-10-08 09:31:41.243157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.736 [2024-10-08 09:31:41.243165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:49.736 [2024-10-08 09:31:41.243171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.736 [2024-10-08 09:31:41.243187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.736 [2024-10-08 09:31:41.243193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.736 [2024-10-08 09:31:41.243199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.736 [2024-10-08 09:31:41.243205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.736 [2024-10-08 09:31:41.243219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.736 [2024-10-08 09:31:41.243226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.736 [2024-10-08 09:31:41.243231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.736 [2024-10-08 09:31:41.243236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:49.736 [2024-10-08 09:31:41.243274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.135 ms, result 0 00:25:49.736 true 00:25:49.736 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.736 { 00:25:49.736 "name": "ftl", 00:25:49.736 "properties": [ 00:25:49.736 { 00:25:49.736 "name": "superblock_version", 00:25:49.736 "value": 5, 00:25:49.736 "read-only": true 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "name": "base_device", 00:25:49.736 "bands": [ 00:25:49.736 { 00:25:49.736 "id": 0, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 1, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 2, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 3, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 4, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 5, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 6, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 7, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 8, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 9, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 10, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 11, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 12, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 13, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 14, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 15, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 16, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 17, 00:25:49.736 "state": "FREE", 00:25:49.736 "validity": 0.0 00:25:49.736 } 00:25:49.736 ], 00:25:49.736 "read-only": true 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "name": "cache_device", 00:25:49.736 "type": "bdev", 00:25:49.736 "chunks": [ 00:25:49.736 { 00:25:49.736 "id": 0, 00:25:49.736 "state": "INACTIVE", 00:25:49.736 "utilization": 0.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 1, 00:25:49.736 "state": "CLOSED", 00:25:49.736 "utilization": 1.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 2, 00:25:49.736 "state": "CLOSED", 00:25:49.736 "utilization": 1.0 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 3, 00:25:49.736 "state": "OPEN", 00:25:49.736 "utilization": 0.001953125 00:25:49.736 }, 00:25:49.736 { 00:25:49.736 "id": 4, 00:25:49.736 "state": "OPEN", 00:25:49.736 "utilization": 0.0 00:25:49.736 } 00:25:49.736 ], 00:25:49.737 "read-only": true 00:25:49.737 }, 00:25:49.737 { 00:25:49.737 "name": "verbose_mode", 
00:25:49.737 "value": true, 00:25:49.737 "unit": "", 00:25:49.737 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.737 }, 00:25:49.737 { 00:25:49.737 "name": "prep_upgrade_on_shutdown", 00:25:49.737 "value": true, 00:25:49.737 "unit": "", 00:25:49.737 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.737 } 00:25:49.737 ] 00:25:49.737 } 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79383 ]] 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79383 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79383 ']' 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 79383 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79383 00:25:49.995 killing process with pid 79383 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79383' 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 79383 00:25:49.995 09:31:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 79383 00:25:50.562 [2024-10-08 09:31:42.025938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:50.562 [2024-10-08 09:31:42.038732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.562 [2024-10-08 09:31:42.038769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:50.562 [2024-10-08 09:31:42.038780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:50.562 [2024-10-08 09:31:42.038787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:50.562 [2024-10-08 09:31:42.038817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:50.562 [2024-10-08 09:31:42.041046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:50.562 [2024-10-08 09:31:42.041076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:50.562 [2024-10-08 09:31:42.041085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:25:50.562 [2024-10-08 09:31:42.041092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.694235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.694291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:00.564 [2024-10-08 09:31:50.694304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8653.095 ms 00:26:00.564 [2024-10-08 09:31:50.694311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.695374] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.695402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:00.564 [2024-10-08 09:31:50.695411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.050 ms 00:26:00.564 [2024-10-08 09:31:50.695417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.696282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.696307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:00.564 [2024-10-08 09:31:50.696314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:26:00.564 [2024-10-08 09:31:50.696320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.703670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.703698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:00.564 [2024-10-08 09:31:50.703705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.326 ms 00:26:00.564 [2024-10-08 09:31:50.703711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.708852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.708881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:00.564 [2024-10-08 09:31:50.708889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.115 ms 00:26:00.564 [2024-10-08 09:31:50.708900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.708963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.708971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:00.564 [2024-10-08 09:31:50.708978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:00.564 [2024-10-08 09:31:50.708984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.716213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.716239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:00.564 [2024-10-08 09:31:50.716246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.217 ms 00:26:00.564 [2024-10-08 09:31:50.716252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.723027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.723054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:00.564 [2024-10-08 09:31:50.723061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.749 ms 00:26:00.564 [2024-10-08 09:31:50.723066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.729966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.729992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:00.564 [2024-10-08 09:31:50.729999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.875 ms 00:26:00.564 [2024-10-08 09:31:50.730004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.564 [2024-10-08 09:31:50.737007] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.564 [2024-10-08 09:31:50.737034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:00.564 [2024-10-08 09:31:50.737040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.956 ms 00:26:00.564 [2024-10-08 09:31:50.737046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.737069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:00.565 [2024-10-08 09:31:50.737080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.565 [2024-10-08 09:31:50.737088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:00.565 [2024-10-08 09:31:50.737094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:00.565 [2024-10-08 09:31:50.737100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:00.565 [2024-10-08 09:31:50.737196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:00.565 [2024-10-08 09:31:50.737202] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a24f5ea0-0ee9-4699-bd85-513426d27a1f 00:26:00.565 [2024-10-08 09:31:50.737208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:00.565 [2024-10-08 09:31:50.737215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:00.565 [2024-10-08 09:31:50.737220] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:00.565 [2024-10-08 09:31:50.737226] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:00.565 [2024-10-08 09:31:50.737234] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:00.565 [2024-10-08 09:31:50.737239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:00.565 [2024-10-08 09:31:50.737245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:00.565 [2024-10-08 09:31:50.737249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:00.565 [2024-10-08 09:31:50.737254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:00.565 [2024-10-08 09:31:50.737259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.565 [2024-10-08 09:31:50.737265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:00.565 [2024-10-08 09:31:50.737271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:26:00.565 [2024-10-08 09:31:50.737276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.747028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.565 [2024-10-08 09:31:50.747056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:00.565 [2024-10-08 09:31:50.747064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.736 ms 00:26:00.565 [2024-10-08 09:31:50.747070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.747337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.565 [2024-10-08 09:31:50.747353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:00.565 [2024-10-08 09:31:50.747359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:26:00.565 [2024-10-08 09:31:50.747365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.776825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.776853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:00.565 [2024-10-08 09:31:50.776861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.776867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.776890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.776897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:00.565 [2024-10-08 09:31:50.776903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.776909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.776962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.776970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:00.565 [2024-10-08 09:31:50.776976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.776982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.776993] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.776999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:00.565 [2024-10-08 09:31:50.777005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.777010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.837530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.837573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:00.565 [2024-10-08 09:31:50.837581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.837587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:00.565 [2024-10-08 09:31:50.886555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:00.565 [2024-10-08 09:31:50.886645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:00.565 [2024-10-08 09:31:50.886694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:00.565 [2024-10-08 09:31:50.886784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:00.565 [2024-10-08 09:31:50.886826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.886859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:00.565 [2024-10-08 09:31:50.886875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 
[2024-10-08 09:31:50.886916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:00.565 [2024-10-08 09:31:50.886929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:00.565 [2024-10-08 09:31:50.886936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:00.565 [2024-10-08 09:31:50.886941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.565 [2024-10-08 09:31:50.887037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8848.260 ms, result 0 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79931 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79931 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79931 ']' 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:03.113 09:31:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:03.373 [2024-10-08 09:31:54.867168] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
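With both checksums banked and the jq check above confirming used=3 non-empty cache chunks, the shutdown half of the test is what the long trace records: flip prep_upgrade_on_shutdown, kill the target so FTL runs its 'FTL shutdown' sequence (8848.260 ms here, dominated by the 8653 ms 'Stop core poller' step), then start a fresh spdk_tgt from the saved config. A paraphrase using the helper names visible in the xtrace -- killprocess and waitforlisten are test-suite helpers, and the backgrounding with & is an assumed detail the trace does not show directly:

  rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  killprocess "$spdk_tgt_pid"     # graceful stop -> 'FTL shutdown', result 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"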
00:26:03.373 [2024-10-08 09:31:54.867291] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79931 ] 00:26:03.373 [2024-10-08 09:31:55.015087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.634 [2024-10-08 09:31:55.163523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.206 [2024-10-08 09:31:55.738037] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:04.206 [2024-10-08 09:31:55.738090] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:04.206 [2024-10-08 09:31:55.885306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.206 [2024-10-08 09:31:55.885357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:04.206 [2024-10-08 09:31:55.885373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:04.206 [2024-10-08 09:31:55.885381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.206 [2024-10-08 09:31:55.885444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.206 [2024-10-08 09:31:55.885454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:04.206 [2024-10-08 09:31:55.885462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:04.206 [2024-10-08 09:31:55.885469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.206 [2024-10-08 09:31:55.885496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:04.206 [2024-10-08 09:31:55.886222] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:04.206 [2024-10-08 09:31:55.886252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.206 [2024-10-08 09:31:55.886259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:04.206 [2024-10-08 09:31:55.886268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.765 ms 00:26:04.206 [2024-10-08 09:31:55.886278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.206 [2024-10-08 09:31:55.887474] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:04.467 [2024-10-08 09:31:55.900200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.900239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:04.467 [2024-10-08 09:31:55.900250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.728 ms 00:26:04.467 [2024-10-08 09:31:55.900258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.900315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.900324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:04.467 [2024-10-08 09:31:55.900333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:04.467 [2024-10-08 09:31:55.900340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.905849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 
09:31:55.905879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:04.467 [2024-10-08 09:31:55.905888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.439 ms 00:26:04.467 [2024-10-08 09:31:55.905895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.905954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.905963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:04.467 [2024-10-08 09:31:55.905971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:04.467 [2024-10-08 09:31:55.905980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.906035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.906045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:04.467 [2024-10-08 09:31:55.906053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:04.467 [2024-10-08 09:31:55.906060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.906082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:04.467 [2024-10-08 09:31:55.909662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.909692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:04.467 [2024-10-08 09:31:55.909701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.584 ms 00:26:04.467 [2024-10-08 09:31:55.909708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.909732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.909740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:04.467 [2024-10-08 09:31:55.909751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:04.467 [2024-10-08 09:31:55.909758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.909780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:04.467 [2024-10-08 09:31:55.909798] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:04.467 [2024-10-08 09:31:55.909831] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:04.467 [2024-10-08 09:31:55.909846] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:04.467 [2024-10-08 09:31:55.909948] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:04.467 [2024-10-08 09:31:55.909961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:04.467 [2024-10-08 09:31:55.909971] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:04.467 [2024-10-08 09:31:55.909980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:04.467 [2024-10-08 09:31:55.909989] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:04.467 [2024-10-08 09:31:55.909997] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:04.467 [2024-10-08 09:31:55.910004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:04.467 [2024-10-08 09:31:55.910011] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:04.467 [2024-10-08 09:31:55.910018] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:04.467 [2024-10-08 09:31:55.910025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.910032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:04.467 [2024-10-08 09:31:55.910040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:26:04.467 [2024-10-08 09:31:55.910048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.910132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.467 [2024-10-08 09:31:55.910151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:04.467 [2024-10-08 09:31:55.910159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:04.467 [2024-10-08 09:31:55.910166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.467 [2024-10-08 09:31:55.910279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:04.467 [2024-10-08 09:31:55.910295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:04.467 [2024-10-08 09:31:55.910303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.467 [2024-10-08 09:31:55.910310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.467 [2024-10-08 09:31:55.910320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:04.467 [2024-10-08 09:31:55.910327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:04.467 [2024-10-08 09:31:55.910334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:04.467 [2024-10-08 09:31:55.910341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:04.468 [2024-10-08 09:31:55.910347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:04.468 [2024-10-08 09:31:55.910353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:04.468 [2024-10-08 09:31:55.910366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:04.468 [2024-10-08 09:31:55.910377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:04.468 [2024-10-08 09:31:55.910402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:04.468 [2024-10-08 09:31:55.910409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:04.468 [2024-10-08 09:31:55.910422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:04.468 [2024-10-08 09:31:55.910429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910436] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:04.468 [2024-10-08 09:31:55.910442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:04.468 [2024-10-08 09:31:55.910462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:04.468 [2024-10-08 09:31:55.910491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:04.468 [2024-10-08 09:31:55.910511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:04.468 [2024-10-08 09:31:55.910534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:04.468 [2024-10-08 09:31:55.910553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:04.468 [2024-10-08 09:31:55.910584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:04.468 [2024-10-08 09:31:55.910603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:04.468 [2024-10-08 09:31:55.910610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910616] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:04.468 [2024-10-08 09:31:55.910627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:04.468 [2024-10-08 09:31:55.910634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.468 [2024-10-08 09:31:55.910648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:04.468 [2024-10-08 09:31:55.910655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:04.468 [2024-10-08 09:31:55.910662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:04.468 [2024-10-08 09:31:55.910669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:04.468 [2024-10-08 09:31:55.910675] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:04.468 [2024-10-08 09:31:55.910681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:04.468 [2024-10-08 09:31:55.910689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:04.468 [2024-10-08 09:31:55.910698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:04.468 [2024-10-08 09:31:55.910713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:04.468 [2024-10-08 09:31:55.910734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:04.468 [2024-10-08 09:31:55.910740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:04.468 [2024-10-08 09:31:55.910747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:04.468 [2024-10-08 09:31:55.910754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:04.468 [2024-10-08 09:31:55.910802] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:04.468 [2024-10-08 09:31:55.910814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:04.468 [2024-10-08 09:31:55.910833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:04.468 [2024-10-08 09:31:55.910840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:04.468 [2024-10-08 09:31:55.910847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:04.468 [2024-10-08 09:31:55.910858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.468 [2024-10-08 09:31:55.910869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:04.468 [2024-10-08 09:31:55.910876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:26:04.468 [2024-10-08 09:31:55.910892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.468 [2024-10-08 09:31:55.910939] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:04.468 [2024-10-08 09:31:55.910951] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:08.676 [2024-10-08 09:31:59.993554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:31:59.993635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:08.676 [2024-10-08 09:31:59.993655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4082.600 ms 00:26:08.676 [2024-10-08 09:31:59.993673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.026883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.026956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:08.676 [2024-10-08 09:32:00.026972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.953 ms 00:26:08.676 [2024-10-08 09:32:00.026981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.027097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.027108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:08.676 [2024-10-08 09:32:00.027118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:08.676 [2024-10-08 09:32:00.027127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.077049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.077119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:08.676 [2024-10-08 09:32:00.077136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.879 ms 00:26:08.676 [2024-10-08 09:32:00.077146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.077204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.077214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:08.676 [2024-10-08 09:32:00.077224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:08.676 [2024-10-08 09:32:00.077232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.077906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.077933] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:08.676 [2024-10-08 09:32:00.077945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:26:08.676 [2024-10-08 09:32:00.077954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.078019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.078030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:08.676 [2024-10-08 09:32:00.078039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:08.676 [2024-10-08 09:32:00.078047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.095764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.095814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:08.676 [2024-10-08 09:32:00.095827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.693 ms 00:26:08.676 [2024-10-08 09:32:00.095837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.111362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:08.676 [2024-10-08 09:32:00.111449] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:08.676 [2024-10-08 09:32:00.111466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.111477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:08.676 [2024-10-08 09:32:00.111489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.455 ms 00:26:08.676 [2024-10-08 09:32:00.111497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.126785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.126846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:08.676 [2024-10-08 09:32:00.126860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.219 ms 00:26:08.676 [2024-10-08 09:32:00.126868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.139766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.139829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:08.676 [2024-10-08 09:32:00.139842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.826 ms 00:26:08.676 [2024-10-08 09:32:00.139851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.152806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.152859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:08.676 [2024-10-08 09:32:00.152872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.899 ms 00:26:08.676 [2024-10-08 09:32:00.152879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.153607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.153636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:08.676 [2024-10-08 
09:32:00.153648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.596 ms 00:26:08.676 [2024-10-08 09:32:00.153655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.222054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.222122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:08.676 [2024-10-08 09:32:00.222138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.374 ms 00:26:08.676 [2024-10-08 09:32:00.222148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.233500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:08.676 [2024-10-08 09:32:00.234660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.234700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:08.676 [2024-10-08 09:32:00.234719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.450 ms 00:26:08.676 [2024-10-08 09:32:00.234727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.234838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.234849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:08.676 [2024-10-08 09:32:00.234859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:08.676 [2024-10-08 09:32:00.234869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.234938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.234949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:08.676 [2024-10-08 09:32:00.234959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:08.676 [2024-10-08 09:32:00.234970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.234994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.235003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:08.676 [2024-10-08 09:32:00.235013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:08.676 [2024-10-08 09:32:00.235021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.235057] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:08.676 [2024-10-08 09:32:00.235069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.235077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:08.676 [2024-10-08 09:32:00.235086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:08.676 [2024-10-08 09:32:00.235094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.261055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.261115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:08.676 [2024-10-08 09:32:00.261129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.935 ms 00:26:08.676 [2024-10-08 09:32:00.261137] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.261238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.676 [2024-10-08 09:32:00.261250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:08.676 [2024-10-08 09:32:00.261260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:26:08.676 [2024-10-08 09:32:00.261272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.676 [2024-10-08 09:32:00.263376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4377.548 ms, result 0 00:26:08.676 [2024-10-08 09:32:00.277513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.676 [2024-10-08 09:32:00.293529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:08.676 [2024-10-08 09:32:00.301865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:09.248 09:32:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.248 09:32:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:09.248 09:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:09.248 09:32:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:09.248 09:32:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:09.509 [2024-10-08 09:32:01.062464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.509 [2024-10-08 09:32:01.062526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:09.509 [2024-10-08 09:32:01.062542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:09.509 [2024-10-08 09:32:01.062551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.509 [2024-10-08 09:32:01.062577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.509 [2024-10-08 09:32:01.062587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:09.509 [2024-10-08 09:32:01.062596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:09.509 [2024-10-08 09:32:01.062605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.509 [2024-10-08 09:32:01.062626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.509 [2024-10-08 09:32:01.062639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:09.509 [2024-10-08 09:32:01.062648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:09.509 [2024-10-08 09:32:01.062657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.509 [2024-10-08 09:32:01.062719] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.251 ms, result 0 00:26:09.509 true 00:26:09.509 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:09.771 { 00:26:09.771 "name": "ftl", 00:26:09.771 "properties": [ 00:26:09.771 { 00:26:09.771 "name": "superblock_version", 00:26:09.771 "value": 5, 00:26:09.771 "read-only": true 00:26:09.771 }, 
00:26:09.771 { 00:26:09.771 "name": "base_device", 00:26:09.771 "bands": [ 00:26:09.771 { 00:26:09.771 "id": 0, 00:26:09.771 "state": "CLOSED", 00:26:09.771 "validity": 1.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 1, 00:26:09.771 "state": "CLOSED", 00:26:09.771 "validity": 1.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 2, 00:26:09.771 "state": "CLOSED", 00:26:09.771 "validity": 0.007843137254901933 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 3, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 4, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 5, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 6, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 7, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 8, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 9, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 10, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 11, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 12, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 13, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 14, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 15, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 16, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 17, 00:26:09.771 "state": "FREE", 00:26:09.771 "validity": 0.0 00:26:09.771 } 00:26:09.771 ], 00:26:09.771 "read-only": true 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "name": "cache_device", 00:26:09.771 "type": "bdev", 00:26:09.771 "chunks": [ 00:26:09.771 { 00:26:09.771 "id": 0, 00:26:09.771 "state": "INACTIVE", 00:26:09.771 "utilization": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 1, 00:26:09.771 "state": "OPEN", 00:26:09.771 "utilization": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 2, 00:26:09.771 "state": "OPEN", 00:26:09.771 "utilization": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 3, 00:26:09.771 "state": "FREE", 00:26:09.771 "utilization": 0.0 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "id": 4, 00:26:09.771 "state": "FREE", 00:26:09.771 "utilization": 0.0 00:26:09.771 } 00:26:09.771 ], 00:26:09.771 "read-only": true 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "name": "verbose_mode", 00:26:09.771 "value": true, 00:26:09.771 "unit": "", 00:26:09.771 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:09.771 }, 00:26:09.771 { 00:26:09.771 "name": "prep_upgrade_on_shutdown", 00:26:09.771 "value": false, 00:26:09.771 "unit": "", 00:26:09.771 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:09.771 } 00:26:09.771 ] 00:26:09.771 } 00:26:09.771 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:09.771 09:32:01 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:09.771 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:10.033 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:10.033 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:10.033 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:10.033 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:10.033 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:10.294 Validate MD5 checksum, iteration 1 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:10.294 09:32:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:10.294 [2024-10-08 09:32:01.806383] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:26:10.294 [2024-10-08 09:32:01.806546] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80024 ] 00:26:10.294 [2024-10-08 09:32:01.960897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.558 [2024-10-08 09:32:02.216296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.488  [2024-10-08T09:32:04.740Z] Copying: 578/1024 [MB] (578 MBps) [2024-10-08T09:32:06.119Z] Copying: 1024/1024 [MB] (average 555 MBps) 00:26:14.436 00:26:14.436 09:32:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:14.436 09:32:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=76d5a46120684c226c2655ad465da8e4 00:26:16.444 Validate MD5 checksum, iteration 2 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 76d5a46120684c226c2655ad465da8e4 != \7\6\d\5\a\4\6\1\2\0\6\8\4\c\2\2\6\c\2\6\5\5\a\d\4\6\5\d\a\8\e\4 ]] 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:16.444 09:32:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:16.708 [2024-10-08 09:32:08.167189] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:26:16.708 [2024-10-08 09:32:08.167497] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80091 ] 00:26:16.708 [2024-10-08 09:32:08.315137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.966 [2024-10-08 09:32:08.482114] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.341  [2024-10-08T09:32:10.593Z] Copying: 714/1024 [MB] (714 MBps) [2024-10-08T09:32:11.530Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:26:19.847 00:26:19.847 09:32:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:19.847 09:32:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=315c0d1977d9a792de2fbbb5d6a2db0e 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 315c0d1977d9a792de2fbbb5d6a2db0e != \3\1\5\c\0\d\1\9\7\7\d\9\a\7\9\2\d\e\2\f\b\b\b\5\d\6\a\2\d\b\0\e ]] 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 79931 ]] 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 79931 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:22.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80152 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80152 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80152 ']' 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
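The two "Validate MD5 checksum" passes above follow the loop traced at upgrade_shutdown.sh@97-@105: pull the next 1024 MiB window out of the ftln1 device over NVMe/TCP, hash it, and require the sum to match the value the test recorded for that window earlier in the run. A condensed sketch of that loop, using only the commands visible in the trace (tcp_dd from ftl/common.sh, the scratch file path, and the md5sum | cut pipeline); the expected_sums array is a hypothetical stand-in for the checksums captured at write time:

    # Condensed sketch of the validation loop traced above; not the verbatim script.
    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from ftln1 at queue depth 2, as traced.
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        # expected_sums[i] is hypothetical here; this run compared against
        # 76d5a46120684c226c2655ad465da8e4 and 315c0d1977d9a792de2fbbb5d6a2db0e.
        [[ $sum == "${expected_sums[i]}" ]] || exit 1
    done

The kill -9 of pid 79931 that follows (tcp_target_shutdown_dirty, ftl/common.sh@138) is deliberate: it denies FTL a clean shutdown, so the next startup of the target, traced below, begins from whatever state the device was left in.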
00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:22.394 09:32:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:22.394 [2024-10-08 09:32:13.546374] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:26:22.394 [2024-10-08 09:32:13.546582] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80152 ] 00:26:22.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 79931 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:22.394 [2024-10-08 09:32:13.689382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.394 [2024-10-08 09:32:13.829289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.974 [2024-10-08 09:32:14.400567] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:22.974 [2024-10-08 09:32:14.400769] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:22.974 [2024-10-08 09:32:14.543376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.543418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:22.974 [2024-10-08 09:32:14.543431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:22.974 [2024-10-08 09:32:14.543437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.543480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.543489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:22.974 [2024-10-08 09:32:14.543496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:22.974 [2024-10-08 09:32:14.543501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.543520] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:22.974 [2024-10-08 09:32:14.544020] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:22.974 [2024-10-08 09:32:14.544036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.544042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:22.974 [2024-10-08 09:32:14.544048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:26:22.974 [2024-10-08 09:32:14.544056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.544311] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:22.974 [2024-10-08 09:32:14.556743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.556771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:22.974 [2024-10-08 09:32:14.556780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.433 ms 
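For reference, the used/opened gates evaluated before the dirty shutdown (upgrade_shutdown.sh@82 and @89 above) are single jq reductions over the bdev_ftl_get_properties JSON dumped earlier; the filters below are verbatim from the trace, only re-wrapped for readability. Both evaluated to 0 in this run, so the [[ 0 -ne 0 ]] branches were skipped:

    # used: NV cache chunks with non-zero utilization
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
    # opened: bands still in OPENED state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'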
00:26:22.974 [2024-10-08 09:32:14.556791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.563560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.563672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:22.974 [2024-10-08 09:32:14.563684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:22.974 [2024-10-08 09:32:14.563691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.563931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.563948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:22.974 [2024-10-08 09:32:14.563955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:26:22.974 [2024-10-08 09:32:14.563960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.563998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.564004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:22.974 [2024-10-08 09:32:14.564010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:22.974 [2024-10-08 09:32:14.564016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.564036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.564043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:22.974 [2024-10-08 09:32:14.564049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:22.974 [2024-10-08 09:32:14.564056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.564071] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:22.974 [2024-10-08 09:32:14.566243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.566265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:22.974 [2024-10-08 09:32:14.566272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.175 ms 00:26:22.974 [2024-10-08 09:32:14.566277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.566295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.566301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:22.974 [2024-10-08 09:32:14.566307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:22.974 [2024-10-08 09:32:14.566312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.566328] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:22.974 [2024-10-08 09:32:14.566341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:22.974 [2024-10-08 09:32:14.566368] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:22.974 [2024-10-08 09:32:14.566381] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:22.974 [2024-10-08 
09:32:14.566467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:22.974 [2024-10-08 09:32:14.566476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:22.974 [2024-10-08 09:32:14.566484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:22.974 [2024-10-08 09:32:14.566492] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566498] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:22.974 [2024-10-08 09:32:14.566512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:22.974 [2024-10-08 09:32:14.566517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:22.974 [2024-10-08 09:32:14.566522] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:22.974 [2024-10-08 09:32:14.566527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.566533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:22.974 [2024-10-08 09:32:14.566539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:26:22.974 [2024-10-08 09:32:14.566545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.566609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.974 [2024-10-08 09:32:14.566615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:22.974 [2024-10-08 09:32:14.566620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:22.974 [2024-10-08 09:32:14.566628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.974 [2024-10-08 09:32:14.566702] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:22.974 [2024-10-08 09:32:14.566714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:22.974 [2024-10-08 09:32:14.566721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.974 [2024-10-08 09:32:14.566733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:22.974 [2024-10-08 09:32:14.566739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:22.974 [2024-10-08 09:32:14.566744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:22.974 [2024-10-08 09:32:14.566749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:22.974 [2024-10-08 09:32:14.566754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:22.974 [2024-10-08 09:32:14.566759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.974 [2024-10-08 09:32:14.566764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:22.974 [2024-10-08 09:32:14.566768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:22.974 [2024-10-08 09:32:14.566773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.974 [2024-10-08 
09:32:14.566783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:22.974 [2024-10-08 09:32:14.566788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:22.974 [2024-10-08 09:32:14.566793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.974 [2024-10-08 09:32:14.566797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:22.974 [2024-10-08 09:32:14.566802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:22.974 [2024-10-08 09:32:14.566807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.974 [2024-10-08 09:32:14.566812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:22.974 [2024-10-08 09:32:14.566817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:22.974 [2024-10-08 09:32:14.566822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:22.974 [2024-10-08 09:32:14.566837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:22.974 [2024-10-08 09:32:14.566841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:22.974 [2024-10-08 09:32:14.566851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:22.974 [2024-10-08 09:32:14.566857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:22.974 [2024-10-08 09:32:14.566866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:22.974 [2024-10-08 09:32:14.566871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:22.974 [2024-10-08 09:32:14.566876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:22.974 [2024-10-08 09:32:14.566881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:22.975 [2024-10-08 09:32:14.566886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:22.975 [2024-10-08 09:32:14.566896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:22.975 [2024-10-08 09:32:14.566901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:22.975 [2024-10-08 09:32:14.566911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:22.975 [2024-10-08 09:32:14.566926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:22.975 [2024-10-08 09:32:14.566931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:22.975 [2024-10-08 09:32:14.566941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:22.975 
[2024-10-08 09:32:14.566949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:22.975 [2024-10-08 09:32:14.566954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.975 [2024-10-08 09:32:14.566960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:22.975 [2024-10-08 09:32:14.566965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:22.975 [2024-10-08 09:32:14.566970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:22.975 [2024-10-08 09:32:14.566975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:22.975 [2024-10-08 09:32:14.566980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:22.975 [2024-10-08 09:32:14.566985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:22.975 [2024-10-08 09:32:14.566992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:22.975 [2024-10-08 09:32:14.567000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:22.975 [2024-10-08 09:32:14.567012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:22.975 [2024-10-08 09:32:14.567028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:22.975 [2024-10-08 09:32:14.567034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:22.975 [2024-10-08 09:32:14.567039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:22.975 [2024-10-08 09:32:14.567044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:22.975 [2024-10-08 09:32:14.567082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:22.975 [2024-10-08 09:32:14.567088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:22.975 [2024-10-08 09:32:14.567100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:22.975 [2024-10-08 09:32:14.567106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:22.975 [2024-10-08 09:32:14.567111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:22.975 [2024-10-08 09:32:14.567117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.567123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:22.975 [2024-10-08 09:32:14.567130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.467 ms 00:26:22.975 [2024-10-08 09:32:14.567136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.586358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.586382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:22.975 [2024-10-08 09:32:14.586398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.185 ms 00:26:22.975 [2024-10-08 09:32:14.586404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.586441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.586449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:22.975 [2024-10-08 09:32:14.586455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:22.975 [2024-10-08 09:32:14.586463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.630330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.630361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:22.975 [2024-10-08 09:32:14.630370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.826 ms 00:26:22.975 [2024-10-08 09:32:14.630377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.630417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.630424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:22.975 [2024-10-08 09:32:14.630431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:22.975 [2024-10-08 09:32:14.630436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.630512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.630521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:26:22.975 [2024-10-08 09:32:14.630527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:22.975 [2024-10-08 09:32:14.630534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.630582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.630590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:22.975 [2024-10-08 09:32:14.630597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:22.975 [2024-10-08 09:32:14.630602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.641317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.641344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:22.975 [2024-10-08 09:32:14.641352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.699 ms 00:26:22.975 [2024-10-08 09:32:14.641358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.641458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.641467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:22.975 [2024-10-08 09:32:14.641474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:22.975 [2024-10-08 09:32:14.641479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.975 [2024-10-08 09:32:14.654691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.975 [2024-10-08 09:32:14.654718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:22.975 [2024-10-08 09:32:14.654727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.196 ms 00:26:22.975 [2024-10-08 09:32:14.654733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.661770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.661796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:23.236 [2024-10-08 09:32:14.661804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:26:23.236 [2024-10-08 09:32:14.661810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.705669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.705705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:23.236 [2024-10-08 09:32:14.705714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.818 ms 00:26:23.236 [2024-10-08 09:32:14.705721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.705819] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:23.236 [2024-10-08 09:32:14.705894] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:23.236 [2024-10-08 09:32:14.705964] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:23.236 [2024-10-08 09:32:14.706034] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:23.236 [2024-10-08 09:32:14.706044] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.706050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:23.236 [2024-10-08 09:32:14.706057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 00:26:23.236 [2024-10-08 09:32:14.706065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.706107] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:23.236 [2024-10-08 09:32:14.706115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.706121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:23.236 [2024-10-08 09:32:14.706127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:23.236 [2024-10-08 09:32:14.706133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.717536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.717563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:23.236 [2024-10-08 09:32:14.717570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.387 ms 00:26:23.236 [2024-10-08 09:32:14.717577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.724093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.724117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:23.236 [2024-10-08 09:32:14.724125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:23.236 [2024-10-08 09:32:14.724133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.236 [2024-10-08 09:32:14.724191] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:23.236 [2024-10-08 09:32:14.724302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.236 [2024-10-08 09:32:14.724310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:23.236 [2024-10-08 09:32:14.724316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.113 ms 00:26:23.236 [2024-10-08 09:32:14.724322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.180 [2024-10-08 09:32:15.512354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.180 [2024-10-08 09:32:15.512465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:24.180 [2024-10-08 09:32:15.512484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 787.350 ms 00:26:24.180 [2024-10-08 09:32:15.512495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.180 [2024-10-08 09:32:15.517472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.180 [2024-10-08 09:32:15.517535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:24.180 [2024-10-08 09:32:15.517547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.716 ms 00:26:24.180 [2024-10-08 09:32:15.517557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.180 [2024-10-08 09:32:15.518579] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:26:24.180 [2024-10-08 09:32:15.518776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.180 [2024-10-08 09:32:15.518794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:24.180 [2024-10-08 09:32:15.518805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:26:24.180 [2024-10-08 09:32:15.518814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.180 [2024-10-08 09:32:15.518864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.180 [2024-10-08 09:32:15.518875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:24.180 [2024-10-08 09:32:15.518885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:24.180 [2024-10-08 09:32:15.518894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.180 [2024-10-08 09:32:15.518934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 794.734 ms, result 0 00:26:24.180 [2024-10-08 09:32:15.518980] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:24.180 [2024-10-08 09:32:15.519113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.180 [2024-10-08 09:32:15.519128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:24.180 [2024-10-08 09:32:15.519138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:26:24.180 [2024-10-08 09:32:15.519145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.441 [2024-10-08 09:32:16.123287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.441 [2024-10-08 09:32:16.123343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:24.441 [2024-10-08 09:32:16.123354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 602.925 ms 00:26:24.441 [2024-10-08 09:32:16.123360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.126896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.126928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:24.703 [2024-10-08 09:32:16.126937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.937 ms 00:26:24.703 [2024-10-08 09:32:16.126944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.127420] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:24.703 [2024-10-08 09:32:16.127460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.127467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:24.703 [2024-10-08 09:32:16.127474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.493 ms 00:26:24.703 [2024-10-08 09:32:16.127479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.127752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.127786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:24.703 [2024-10-08 09:32:16.127795] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:24.703 [2024-10-08 09:32:16.127801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.127842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 608.862 ms, result 0 00:26:24.703 [2024-10-08 09:32:16.127878] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:24.703 [2024-10-08 09:32:16.127887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:24.703 [2024-10-08 09:32:16.127895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.127901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:24.703 [2024-10-08 09:32:16.127911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1403.714 ms 00:26:24.703 [2024-10-08 09:32:16.127917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.127940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.127947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:24.703 [2024-10-08 09:32:16.127953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:24.703 [2024-10-08 09:32:16.127959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.136809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:24.703 [2024-10-08 09:32:16.136917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.136926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:24.703 [2024-10-08 09:32:16.136933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.945 ms 00:26:24.703 [2024-10-08 09:32:16.136939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.137477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.137493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:24.703 [2024-10-08 09:32:16.137500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:26:24.703 [2024-10-08 09:32:16.137506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:24.703 [2024-10-08 09:32:16.139229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.681 ms 00:26:24.703 [2024-10-08 09:32:16.139237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:24.703 [2024-10-08 09:32:16.139285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:24.703 [2024-10-08 09:32:16.139291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139371] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:24.703 [2024-10-08 09:32:16.139384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:24.703 [2024-10-08 09:32:16.139412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:24.703 [2024-10-08 09:32:16.139454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:24.703 [2024-10-08 09:32:16.139460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:24.703 [2024-10-08 09:32:16.139492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:24.703 [2024-10-08 09:32:16.139503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:24.703 [2024-10-08 09:32:16.139509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.139548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:24.703 [2024-10-08 09:32:16.139556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:24.703 [2024-10-08 09:32:16.139566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:24.703 [2024-10-08 09:32:16.139571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:24.703 [2024-10-08 09:32:16.140420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1596.637 ms, result 0 00:26:24.703 [2024-10-08 09:32:16.153340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.703 [2024-10-08 09:32:16.169327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:24.703 [2024-10-08 09:32:16.177433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:24.703 Validate MD5 checksum, iteration 1 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:24.703 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:24.704 09:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:24.704 [2024-10-08 09:32:16.277728] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:26:24.704 [2024-10-08 09:32:16.277952] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80187 ] 00:26:24.965 [2024-10-08 09:32:16.424751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.965 [2024-10-08 09:32:16.627034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.881  [2024-10-08T09:32:19.130Z] Copying: 609/1024 [MB] (609 MBps) [2024-10-08T09:32:20.508Z] Copying: 1024/1024 [MB] (average 567 MBps) 00:26:28.825 00:26:28.825 09:32:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:28.825 09:32:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=76d5a46120684c226c2655ad465da8e4 00:26:30.729 Validate MD5 checksum, iteration 2 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 76d5a46120684c226c2655ad465da8e4 != \7\6\d\5\a\4\6\1\2\0\6\8\4\c\2\2\6\c\2\6\5\5\a\d\4\6\5\d\a\8\e\4 ]] 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:30.729 09:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:30.729 [2024-10-08 09:32:22.178592] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 00:26:30.729 [2024-10-08 09:32:22.178691] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80254 ] 00:26:30.729 [2024-10-08 09:32:22.322347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.988 [2024-10-08 09:32:22.487326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.365  [2024-10-08T09:32:24.619Z] Copying: 746/1024 [MB] (746 MBps) [2024-10-08T09:32:26.537Z] Copying: 1024/1024 [MB] (average 719 MBps) 00:26:34.854 00:26:35.114 09:32:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:35.114 09:32:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=315c0d1977d9a792de2fbbb5d6a2db0e 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 315c0d1977d9a792de2fbbb5d6a2db0e != \3\1\5\c\0\d\1\9\7\7\d\9\a\7\9\2\d\e\2\f\b\b\b\5\d\6\a\2\d\b\0\e ]] 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80152 ]] 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80152 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80152 ']' 00:26:37.028 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80152 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80152 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:37.289 killing process with pid 80152 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80152' 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80152 00:26:37.289 09:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80152 00:26:37.860 [2024-10-08 09:32:29.535851] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:38.123 [2024-10-08 09:32:29.551944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.552009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:38.123 [2024-10-08 09:32:29.552029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:38.123 [2024-10-08 09:32:29.552042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.552069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:38.123 [2024-10-08 09:32:29.555409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.555494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:38.123 [2024-10-08 09:32:29.555510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.324 ms 00:26:38.123 [2024-10-08 09:32:29.555519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.555804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.555818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:38.123 [2024-10-08 09:32:29.555828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:26:38.123 [2024-10-08 09:32:29.555839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.558119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.558280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:38.123 [2024-10-08 09:32:29.558317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.251 ms 00:26:38.123 [2024-10-08 09:32:29.558344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.562312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.562712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:38.123 [2024-10-08 09:32:29.562760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.809 ms 00:26:38.123 [2024-10-08 09:32:29.562784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.575413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.575493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:38.123 [2024-10-08 09:32:29.575509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.403 ms 00:26:38.123 [2024-10-08 09:32:29.575518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.581640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.581826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:26:38.123 [2024-10-08 09:32:29.581849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.072 ms 00:26:38.123 [2024-10-08 09:32:29.581858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.582056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.582094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:38.123 [2024-10-08 09:32:29.582105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:26:38.123 [2024-10-08 09:32:29.582114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.592481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.592666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:38.123 [2024-10-08 09:32:29.592686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.347 ms 00:26:38.123 [2024-10-08 09:32:29.592694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.603900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.604111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:38.123 [2024-10-08 09:32:29.604134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.870 ms 00:26:38.123 [2024-10-08 09:32:29.604143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.614748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.614939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:38.123 [2024-10-08 09:32:29.614960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.484 ms 00:26:38.123 [2024-10-08 09:32:29.614969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.625570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.123 [2024-10-08 09:32:29.625779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:38.123 [2024-10-08 09:32:29.625804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.219 ms 00:26:38.123 [2024-10-08 09:32:29.625812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.123 [2024-10-08 09:32:29.625932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:38.123 [2024-10-08 09:32:29.625968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.123 [2024-10-08 09:32:29.625980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.124 [2024-10-08 09:32:29.625989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:38.124 [2024-10-08 09:32:29.625998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626023] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:38.124 [2024-10-08 09:32:29.626115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:38.124 [2024-10-08 09:32:29.626123] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a24f5ea0-0ee9-4699-bd85-513426d27a1f 00:26:38.124 [2024-10-08 09:32:29.626132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:38.124 [2024-10-08 09:32:29.626140] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:38.124 [2024-10-08 09:32:29.626148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:38.124 [2024-10-08 09:32:29.626165] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:38.124 [2024-10-08 09:32:29.626173] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:38.124 [2024-10-08 09:32:29.626181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:38.124 [2024-10-08 09:32:29.626189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:38.124 [2024-10-08 09:32:29.626196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:38.124 [2024-10-08 09:32:29.626203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:38.124 [2024-10-08 09:32:29.626212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.124 [2024-10-08 09:32:29.626225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:38.124 [2024-10-08 09:32:29.626237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:26:38.124 [2024-10-08 09:32:29.626246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.640148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.124 [2024-10-08 09:32:29.640204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:26:38.124 [2024-10-08 09:32:29.640216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.880 ms 00:26:38.124 [2024-10-08 09:32:29.640225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.640672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.124 [2024-10-08 09:32:29.640684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:38.124 [2024-10-08 09:32:29.640694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:26:38.124 [2024-10-08 09:32:29.640703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.682544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.124 [2024-10-08 09:32:29.682599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:38.124 [2024-10-08 09:32:29.682611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.124 [2024-10-08 09:32:29.682620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.682663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.124 [2024-10-08 09:32:29.682671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:38.124 [2024-10-08 09:32:29.682680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.124 [2024-10-08 09:32:29.682689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.682770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.124 [2024-10-08 09:32:29.682782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:38.124 [2024-10-08 09:32:29.682796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.124 [2024-10-08 09:32:29.682805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.682824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.124 [2024-10-08 09:32:29.682832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:38.124 [2024-10-08 09:32:29.682842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.124 [2024-10-08 09:32:29.682850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.124 [2024-10-08 09:32:29.767266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.124 [2024-10-08 09:32:29.767575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:38.124 [2024-10-08 09:32:29.767599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.124 [2024-10-08 09:32:29.767608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:38.385 [2024-10-08 09:32:29.837109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837230] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:38.385 [2024-10-08 09:32:29.837239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:38.385 [2024-10-08 09:32:29.837325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:38.385 [2024-10-08 09:32:29.837492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:38.385 [2024-10-08 09:32:29.837573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:38.385 [2024-10-08 09:32:29.837645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.385 [2024-10-08 09:32:29.837722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:38.385 [2024-10-08 09:32:29.837732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.385 [2024-10-08 09:32:29.837741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.385 [2024-10-08 09:32:29.837884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 285.903 ms, result 0 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:39.327 Remove shared memory files 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79931 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:39.327 ************************************ 00:26:39.327 END TEST ftl_upgrade_shutdown 00:26:39.327 ************************************ 00:26:39.327 00:26:39.327 real 1m23.887s 00:26:39.327 user 1m55.092s 00:26:39.327 sys 0m19.667s 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.327 09:32:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:39.327 Process with pid 72639 is not found 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@14 -- # killprocess 72639 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@950 -- # '[' -z 72639 ']' 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@954 -- # kill -0 72639 00:26:39.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72639) - No such process 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72639 is not found' 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:26:39.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80384 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@20 -- # waitforlisten 80384 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@831 -- # '[' -z 80384 ']' 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.327 09:32:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:39.327 09:32:30 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:39.327 [2024-10-08 09:32:30.992866] Starting SPDK v25.01-pre git sha1 91fca59bc / DPDK 24.03.0 initialization... 
00:26:39.327 [2024-10-08 09:32:30.992974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80384 ] 00:26:39.587 [2024-10-08 09:32:31.137169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.847 [2024-10-08 09:32:31.289966] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.108 09:32:31 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.108 09:32:31 ftl -- common/autotest_common.sh@864 -- # return 0 00:26:40.108 09:32:31 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:40.368 nvme0n1 00:26:40.368 09:32:31 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:26:40.369 09:32:31 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:40.369 09:32:31 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:40.629 09:32:32 ftl -- ftl/common.sh@28 -- # stores=d62e482d-abab-4e0c-8d50-7f018511949c 00:26:40.629 09:32:32 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:26:40.629 09:32:32 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d62e482d-abab-4e0c-8d50-7f018511949c 00:26:40.890 09:32:32 ftl -- ftl/ftl.sh@23 -- # killprocess 80384 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@950 -- # '[' -z 80384 ']' 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@954 -- # kill -0 80384 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@955 -- # uname 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80384 00:26:40.890 killing process with pid 80384 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80384' 00:26:40.890 09:32:32 ftl -- common/autotest_common.sh@969 -- # kill 80384 00:26:40.891 09:32:32 ftl -- common/autotest_common.sh@974 -- # wait 80384 00:26:42.276 09:32:33 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:42.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:42.276 Waiting for block devices as requested 00:26:42.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:42.537 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:42.537 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:42.537 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:47.829 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:47.829 Remove shared memory files 00:26:47.829 09:32:39 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:26:47.829 09:32:39 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:47.829 09:32:39 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:26:47.829 09:32:39 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:26:47.829 09:32:39 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:26:47.829 09:32:39 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:47.829 09:32:39 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:26:47.830 
************************************ 00:26:47.830 END TEST ftl 00:26:47.830 ************************************ 00:26:47.830 00:26:47.830 real 11m10.979s 00:26:47.830 user 13m15.553s 00:26:47.830 sys 1m16.805s 00:26:47.830 09:32:39 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.830 09:32:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:47.830 09:32:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:47.830 09:32:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:47.830 09:32:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:47.830 09:32:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:47.830 09:32:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:47.830 09:32:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:47.830 09:32:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:47.830 09:32:39 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:47.830 09:32:39 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:47.830 09:32:39 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:47.830 09:32:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.830 09:32:39 -- common/autotest_common.sh@10 -- # set +x 00:26:47.830 09:32:39 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:47.830 09:32:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:26:47.830 09:32:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:26:47.830 09:32:39 -- common/autotest_common.sh@10 -- # set +x 00:26:49.215 INFO: APP EXITING 00:26:49.215 INFO: killing all VMs 00:26:49.215 INFO: killing vhost app 00:26:49.215 INFO: EXIT DONE 00:26:49.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:50.130 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:50.130 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:50.130 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:26:50.130 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:26:50.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:50.961 Cleaning 00:26:50.961 Removing: /var/run/dpdk/spdk0/config 00:26:50.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:50.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:50.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:50.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:50.961 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:50.961 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:50.961 Removing: /var/run/dpdk/spdk0 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57278 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57469 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57676 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57775 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57814 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57937 00:26:50.961 Removing: /var/run/dpdk/spdk_pid57955 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58148 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58241 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58337 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58443 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58534 00:26:50.961 Removing: /var/run/dpdk/spdk_pid58574 00:26:50.962 Removing: /var/run/dpdk/spdk_pid58616 00:26:50.962 Removing: /var/run/dpdk/spdk_pid58686 00:26:50.962 Removing: /var/run/dpdk/spdk_pid58798 00:26:50.962 Removing: /var/run/dpdk/spdk_pid59234 00:26:50.962 Removing: /var/run/dpdk/spdk_pid59287 
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59339
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59355
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59457
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59473
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59575
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59586
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59644
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59661
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59714
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59728
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59888
00:26:50.962 Removing: /var/run/dpdk/spdk_pid59924
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60013
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60191
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60275
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60310
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60738
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60836
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60945
00:26:50.962 Removing: /var/run/dpdk/spdk_pid60998
00:26:50.962 Removing: /var/run/dpdk/spdk_pid61031
00:26:50.962 Removing: /var/run/dpdk/spdk_pid61115
00:26:50.962 Removing: /var/run/dpdk/spdk_pid61736
00:26:50.962 Removing: /var/run/dpdk/spdk_pid61773
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62234
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62332
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62447
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62500
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62531
00:26:50.962 Removing: /var/run/dpdk/spdk_pid62556
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64391
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64528
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64532
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64544
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64583
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64587
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64599
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64638
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64642
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64654
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64699
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64703
00:26:50.962 Removing: /var/run/dpdk/spdk_pid64715
00:26:50.962 Removing: /var/run/dpdk/spdk_pid66081
00:26:50.962 Removing: /var/run/dpdk/spdk_pid66179
00:26:50.962 Removing: /var/run/dpdk/spdk_pid67579
00:26:50.962 Removing: /var/run/dpdk/spdk_pid68961
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69043
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69131
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69208
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69307
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69387
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69530
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69888
00:26:50.962 Removing: /var/run/dpdk/spdk_pid69919
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70362
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70549
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70642
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70762
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70807
00:26:50.962 Removing: /var/run/dpdk/spdk_pid70838
00:26:50.962 Removing: /var/run/dpdk/spdk_pid71159
00:26:50.962 Removing: /var/run/dpdk/spdk_pid71214
00:26:50.962 Removing: /var/run/dpdk/spdk_pid71290
00:26:50.962 Removing: /var/run/dpdk/spdk_pid71687
00:26:50.962 Removing: /var/run/dpdk/spdk_pid71834
00:26:50.962 Removing: /var/run/dpdk/spdk_pid72639
00:26:50.962 Removing: /var/run/dpdk/spdk_pid72771
00:26:50.962 Removing: /var/run/dpdk/spdk_pid72941
00:26:50.962 Removing: /var/run/dpdk/spdk_pid73033
00:26:50.962 Removing: /var/run/dpdk/spdk_pid73336
00:26:50.962 Removing: /var/run/dpdk/spdk_pid73574
00:26:50.962 Removing: /var/run/dpdk/spdk_pid73912
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74094
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74191
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74244
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74337
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74362
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74420
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74581
00:26:50.962 Removing: /var/run/dpdk/spdk_pid74800
00:26:51.222 Removing: /var/run/dpdk/spdk_pid75058
00:26:51.222 Removing: /var/run/dpdk/spdk_pid75328
00:26:51.222 Removing: /var/run/dpdk/spdk_pid75597
00:26:51.222 Removing: /var/run/dpdk/spdk_pid76349
00:26:51.222 Removing: /var/run/dpdk/spdk_pid76501
00:26:51.222 Removing: /var/run/dpdk/spdk_pid76589
00:26:51.222 Removing: /var/run/dpdk/spdk_pid77012
00:26:51.222 Removing: /var/run/dpdk/spdk_pid77071
00:26:51.222 Removing: /var/run/dpdk/spdk_pid78007
00:26:51.222 Removing: /var/run/dpdk/spdk_pid78594
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79383
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79505
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79555
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79608
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79668
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79733
00:26:51.222 Removing: /var/run/dpdk/spdk_pid79931
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80024
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80091
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80152
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80187
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80254
00:26:51.222 Removing: /var/run/dpdk/spdk_pid80384
00:26:51.222 Clean
00:26:51.222 09:32:42 -- common/autotest_common.sh@1451 -- # return 0
00:26:51.222 09:32:42 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:26:51.222 09:32:42 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:51.222 09:32:42 -- common/autotest_common.sh@10 -- # set +x
00:26:51.222 09:32:42 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:26:51.222 09:32:42 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:51.222 09:32:42 -- common/autotest_common.sh@10 -- # set +x
00:26:51.222 09:32:42 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:51.222 09:32:42 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:26:51.222 09:32:42 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:26:51.222 09:32:42 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:26:51.222 09:32:42 -- spdk/autotest.sh@394 -- # hostname
00:26:51.222 09:32:42 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:51.482 geninfo: WARNING: invalid characters removed from testname!
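The autotest.sh@394 entry above captures per-test coverage from the build tree with lcov, and the autotest.sh@395-@403 entries that follow merge it into the baseline and filter bundled code out of the totals. A minimal bash sketch of that capture, merge, and filter flow, assuming the same paths as the log (REPO and OUT are stand-ins, and the --rc coverage switches passed on every lcov call in the log are omitted here for brevity):

  #!/usr/bin/env bash
  # Sketch of the coverage flow traced in this log; paths are stand-ins.
  REPO=/home/vagrant/spdk_repo/spdk
  OUT=$REPO/../output

  # Capture what actually executed during the run, tagged with the VM hostname
  # (autotest.sh@394 above passes -t fedora39-cloud-... the same way).
  lcov -q -c --no-external -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

  # Fold the test counters into the cov_base.info baseline captured earlier in the job.
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # Strip bundled and system code so the totals cover only the repo itself,
  # one remove pass per pattern (the '/usr/*' pass in the log additionally
  # passes --ignore-errors unused,unused).
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done

Filtering in place, rereading and rewriting cov_total.info for each exclusion, mirrors how the traced @396-@403 steps work.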
00:27:18.066 09:33:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:19.452 09:33:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:21.999 09:33:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:24.540 09:33:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:26.453 09:33:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:28.430 09:33:19 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:30.342 09:33:21 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:30.342 09:33:21 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:27:30.342 09:33:21 -- common/autotest_common.sh@1681 -- $ lcov --version
00:27:30.342 09:33:21 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:27:30.342 09:33:21 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:27:30.342 09:33:21 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:27:30.342 09:33:21 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:27:30.342 09:33:21 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:27:30.342 09:33:21 -- scripts/common.sh@336 -- $ IFS=.-:
00:27:30.342 09:33:21 -- scripts/common.sh@336 -- $ read -ra ver1
00:27:30.342 09:33:21 -- scripts/common.sh@337 -- $ IFS=.-:
00:27:30.342 09:33:21 -- scripts/common.sh@337 -- $ read -ra ver2
00:27:30.342 09:33:21 -- scripts/common.sh@338 -- $ local 'op=<'
00:27:30.342 09:33:21 -- scripts/common.sh@340 -- $ ver1_l=2
00:27:30.342 09:33:21 -- scripts/common.sh@341 -- $ ver2_l=1
00:27:30.342 09:33:21 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:27:30.342 09:33:21 -- scripts/common.sh@344 -- $ case "$op" in
00:27:30.342 09:33:21 -- scripts/common.sh@345 -- $ : 1
00:27:30.342 09:33:21 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:27:30.342 09:33:21 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:30.342 09:33:21 -- scripts/common.sh@365 -- $ decimal 1
00:27:30.342 09:33:21 -- scripts/common.sh@353 -- $ local d=1
00:27:30.342 09:33:21 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:27:30.342 09:33:21 -- scripts/common.sh@355 -- $ echo 1
00:27:30.342 09:33:21 -- scripts/common.sh@365 -- $ ver1[v]=1
00:27:30.342 09:33:21 -- scripts/common.sh@366 -- $ decimal 2
00:27:30.342 09:33:21 -- scripts/common.sh@353 -- $ local d=2
00:27:30.342 09:33:21 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:27:30.342 09:33:21 -- scripts/common.sh@355 -- $ echo 2
00:27:30.342 09:33:21 -- scripts/common.sh@366 -- $ ver2[v]=2
00:27:30.342 09:33:21 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:27:30.342 09:33:21 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:27:30.342 09:33:21 -- scripts/common.sh@368 -- $ return 0
00:27:30.342 09:33:21 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:30.342 09:33:21 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:27:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.342 --rc genhtml_branch_coverage=1
00:27:30.342 --rc genhtml_function_coverage=1
00:27:30.342 --rc genhtml_legend=1
00:27:30.342 --rc geninfo_all_blocks=1
00:27:30.342 --rc geninfo_unexecuted_blocks=1
00:27:30.342
00:27:30.342 '
00:27:30.342 09:33:21 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:27:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.342 --rc genhtml_branch_coverage=1
00:27:30.342 --rc genhtml_function_coverage=1
00:27:30.342 --rc genhtml_legend=1
00:27:30.342 --rc geninfo_all_blocks=1
00:27:30.342 --rc geninfo_unexecuted_blocks=1
00:27:30.342
00:27:30.342 '
00:27:30.342 09:33:21 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:27:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.342 --rc genhtml_branch_coverage=1
00:27:30.342 --rc genhtml_function_coverage=1
00:27:30.342 --rc genhtml_legend=1
00:27:30.342 --rc geninfo_all_blocks=1
00:27:30.342 --rc geninfo_unexecuted_blocks=1
00:27:30.342
00:27:30.342 '
00:27:30.342 09:33:21 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:27:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:30.342 --rc genhtml_branch_coverage=1
00:27:30.342 --rc genhtml_function_coverage=1
00:27:30.342 --rc genhtml_legend=1
00:27:30.342 --rc geninfo_all_blocks=1
00:27:30.342 --rc geninfo_unexecuted_blocks=1
00:27:30.342
00:27:30.342 '
00:27:30.342 09:33:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:30.342 09:33:21 -- scripts/common.sh@15 -- $ shopt -s extglob
00:27:30.342 09:33:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:30.342 09:33:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:30.342 09:33:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:30.342 09:33:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.342 09:33:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.342 09:33:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.342 09:33:21 -- paths/export.sh@5 -- $ export PATH
00:27:30.342 09:33:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:30.342 09:33:21 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:30.343 09:33:21 -- common/autobuild_common.sh@486 -- $ date +%s
00:27:30.343 09:33:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728380001.XXXXXX
00:27:30.343 09:33:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728380001.aZVXR9
00:27:30.343 09:33:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:27:30.343 09:33:21 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:27:30.343 09:33:21 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:27:30.343 09:33:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:30.343 09:33:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:30.343 09:33:21 -- common/autobuild_common.sh@502 -- $ get_config_params
00:27:30.343 09:33:21 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:27:30.343 09:33:21 -- common/autotest_common.sh@10 -- $ set +x
00:27:30.343 09:33:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:27:30.343 09:33:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:27:30.343 09:33:21 -- pm/common@17 -- $ local monitor
00:27:30.343 09:33:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:30.343 09:33:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
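The scripts/common.sh trace above shows how `lt 1.15 2` is decided: cmp_versions splits both version strings on the characters `.`, `-`, and `:` into arrays, normalizes each field through a decimal helper, and compares field by field, treating missing fields as 0. A condensed re-implementation of that comparison, assuming purely numeric fields (the decimal() normalization and the lt/gt/eq bookkeeping of the real script are elided):

  # Condensed sketch of the element-wise version compare traced above.
  cmp_versions() {
      local op=$2
      local IFS=.-:                     # split fields on . - and :
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # short versions pad with 0
          if (( a > b )); then [[ $op == '>' ]]; return; fi
          if (( a < b )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' ]]                 # every field matched
  }

  lt() { cmp_versions "$1" '<' "$2"; }  # the helper invoked as: lt 1.15 2
  lt 1.15 2 && echo '1.15 is older than 2'

For 1.15 versus 2 the first field already settles it (1 < 2), which is why the traced run returns 0 at scripts/common.sh@368 on its first loop iteration.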
00:27:30.343 09:33:21 -- pm/common@25 -- $ sleep 1
00:27:30.343 09:33:21 -- pm/common@21 -- $ date +%s
00:27:30.343 09:33:21 -- pm/common@21 -- $ date +%s
00:27:30.343 09:33:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728380001
00:27:30.343 09:33:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728380001
00:27:30.343 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728380001_collect-vmstat.pm.log
00:27:30.343 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728380001_collect-cpu-load.pm.log
00:27:31.284 09:33:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:27:31.284 09:33:22 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:27:31.284 09:33:22 -- spdk/autopackage.sh@14 -- $ timing_finish
00:27:31.284 09:33:22 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:31.284 09:33:22 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:27:31.284 09:33:22 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:31.545 09:33:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:27:31.545 09:33:22 -- pm/common@29 -- $ signal_monitor_resources TERM
00:27:31.545 09:33:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:27:31.545 09:33:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:31.545 09:33:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:27:31.545 09:33:22 -- pm/common@44 -- $ pid=82084
00:27:31.545 09:33:22 -- pm/common@50 -- $ kill -TERM 82084
00:27:31.545 09:33:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:31.545 09:33:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:27:31.545 09:33:22 -- pm/common@44 -- $ pid=82085
00:27:31.545 09:33:22 -- pm/common@50 -- $ kill -TERM 82085
+ [[ -n 5027 ]]
+ sudo kill 5027
00:27:31.557 [Pipeline] }
00:27:31.574 [Pipeline] // timeout
00:27:31.579 [Pipeline] }
00:27:31.590 [Pipeline] // stage
00:27:31.595 [Pipeline] }
00:27:31.607 [Pipeline] // catchError
00:27:31.618 [Pipeline] stage
00:27:31.621 [Pipeline] { (Stop VM)
00:27:31.633 [Pipeline] sh
00:27:31.918 + vagrant halt
00:27:34.458 ==> default: Halting domain...
00:27:39.758 [Pipeline] sh
00:27:40.043 + vagrant destroy -f
00:27:42.586 ==> default: Removing domain...
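The pm/common entries above show both halves of the monitor lifecycle: start_monitor_resources launches collect-cpu-load and collect-vmstat against the power/ output directory, and stop_monitor_resources, installed as an EXIT trap at autobuild_common.sh@505, later walks MONITOR_RESOURCES, checks each collector's .pid file, and sends TERM. A simplified sketch of that pidfile pattern, assuming the launcher can background the collectors itself (in the real scripts the collectors daemonize and manage their own pidfiles, so the bookkeeping below is illustrative only):

  #!/usr/bin/env bash
  # Sketch of the pidfile-based monitor lifecycle seen in the pm/common trace.
  POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

  start_monitor_resources() {
      local monitor
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          scripts/perf/pm/"$monitor" -d "$POWER_DIR" -l -p "monitor.$$" &
          echo $! > "$POWER_DIR/$monitor.pid"    # remember who to stop later
      done
  }

  signal_monitor_resources() {
      local signal=$1 monitor pid
      for monitor in "${MONITOR_RESOURCES[@]}"; do
          [[ -e $POWER_DIR/$monitor.pid ]] || continue   # never started
          pid=$(<"$POWER_DIR/$monitor.pid")
          kill "-$signal" "$pid" && rm -f "$POWER_DIR/$monitor.pid"
      done
  }

  stop_monitor_resources() { signal_monitor_resources TERM; }
  trap stop_monitor_resources EXIT    # matches autobuild_common.sh@505 above

Guarding each kill behind the pidfile existence check is what lets the trap run safely even when a collector was never started or already exited.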
00:27:43.172 [Pipeline] sh
00:27:43.456 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:27:43.467 [Pipeline] }
00:27:43.482 [Pipeline] // stage
00:27:43.487 [Pipeline] }
00:27:43.501 [Pipeline] // dir
00:27:43.506 [Pipeline] }
00:27:43.520 [Pipeline] // wrap
00:27:43.526 [Pipeline] }
00:27:43.539 [Pipeline] // catchError
00:27:43.549 [Pipeline] stage
00:27:43.552 [Pipeline] { (Epilogue)
00:27:43.565 [Pipeline] sh
00:27:43.850 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:49.133 [Pipeline] catchError
00:27:49.135 [Pipeline] {
00:27:49.148 [Pipeline] sh
00:27:49.468 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:49.468 Artifacts sizes are good
00:27:49.478 [Pipeline] }
00:27:49.518 [Pipeline] // catchError
00:27:49.530 [Pipeline] archiveArtifacts
00:27:49.538 Archiving artifacts
00:27:49.630 [Pipeline] cleanWs
00:27:49.642 [WS-CLEANUP] Deleting project workspace...
00:27:49.642 [WS-CLEANUP] Deferred wipeout is used...
00:27:49.650 [WS-CLEANUP] done
00:27:49.652 [Pipeline] }
00:27:49.668 [Pipeline] // stage
00:27:49.673 [Pipeline] }
00:27:49.687 [Pipeline] // node
00:27:49.692 [Pipeline] End of Pipeline
00:27:49.731 Finished: SUCCESS